I am an Associate Professor in the Computer Science Department at the University of Crete, and a Guest Researcher at the Computer Architecture and VLSI Systems Laboratory of the Institute of Computer Science of the Foundation for Research and Technology - Hellas. My research interests include concurrency, programming languages, and data analytics.



My research aims to develop tools and systems for the development and efficient execution of parallel and distributed software. I am interested in programming languages and software engineering, with a focus on concurrent and distributed software.


I am a Work Package leader in the EuroEXA project, where I work on programming models and runtime systems that facilitate the development, distribution, and scaling of exascale applications on novel hardware. In the ETAK project, I work on automating the deployment and efficient scheduling of earthquake and building simulations as services hosted in the cloud.

Past Projects

I coordinated Project ASAP, which focused on designing, developing, modeling, and efficiently scheduling big data analytics workflows. The project finished successfully and was rated "excellent". Myrmics, BDDT, and PARTEE are task-parallel runtimes for distributed-memory, non-cache-coherent shared-memory, and cache-coherent shared-memory multicore processors, respectively, that scale parallel applications to tens or hundreds of cores. GreenVM is a Java Virtual Machine for distributed-memory manycores, running on the 512-core Formic prototype.

Software Fault Tolerance

An error in a single core of a 64-core or 512-core parallel processor should not cause the whole system to malfunction. We design and build software systems that can recover from soft errors and hard errors, focusing on multicore processors. We use traditional techniques like checkpointing and redundancy, and combine them with high-level abstractions like task-parallel programming that make fault tolerance very efficient.

Static Analysis and Compilation

We implemented several static analyses that take advantage of high-level abstractions in parallel programming languages, such as regions and tasks, to compile these programs efficiently for multicore processors and distributed systems.
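As a simplified sketch of what such an analysis can extract, suppose each task declares the regions it reads and writes. The compiler can then derive read-after-write dependences between tasks directly from these declared footprints, without whole-program pointer analysis. The code below is an illustrative toy, not one of our actual analyses; the task names and region model are assumptions made for the example.

```python
def build_dependencies(tasks):
    """tasks: list of (name, reads, writes), where reads/writes are
    sets of region names. Returns (producer, consumer) edges, i.e.
    flow (read-after-write) dependences derived from the declared
    region footprints alone."""
    last_writer = {}  # region name -> last task that wrote it
    deps = set()
    for name, reads, writes in tasks:
        for region in reads:
            if region in last_writer:
                deps.add((last_writer[region], name))
        for region in writes:
            last_writer[region] = name
    return deps

# A toy pipeline: "init" produces region A, "compute" reads A and
# produces B, "reduce" reads B and produces C.
pipeline = [
    ("init", set(), {"A"}),
    ("compute", {"A"}, {"B"}),
    ("reduce", {"B"}, {"C"}),
]
edges = build_dependencies(pipeline)
```

This only tracks flow dependences; a real analysis would also handle anti- and output dependences, nested regions, and aliasing, but the principle of exploiting declared footprints is the same.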


Students, Supervised or Co-Supervised