11th IEEE International Workshop on Data Flow Models and Extreme-Scale Computing (DFM 2022)
Goal of the workshop:
The workshop aims to highlight advances in event-driven and data-driven models of computation for extreme-scale computing, parallel and distributed computing for high-performance computing, and high-end embedded and cyber-physical systems. This includes both the hardware and software levels of computer systems. It also aims to foster exchange among dataflow practitioners at both the theoretical and practical levels.
With the advent of true many-core systems, it has become unreasonable to rely solely on control-based parallel models of computation to achieve high scalability. Dataflow-inspired models of computation, once dismissed by the sequential programming community, are again serious contenders for improving programmability, performance, and scalability in highly parallel and extreme-scale systems, as well as power and energy efficiency: they (at least partially) relieve the parallel application programmer of tedious and perilous synchronization bookkeeping, and they provide clear scheduling points for the system software and hardware. They are also an invaluable tool for high-end embedded computing when dealing with real-time constraints. However, to reach such scalability levels, extreme-scale systems rely on heterogeneity, hierarchical memory subsystems, etc. Meanwhile, legacy programming and execution models, such as MPI and OpenMP, are adding asynchronous and data-driven constructs to their models while trying to account for the very complex heterogeneous hardware targeted by parallel applications. Consequently, programming and execution models that combine legacy control-flow-based and dataflow-based aspects of computing have become increasingly complex to handle. Developing new models and their implementations, from the application programmer level through the system level down to the hardware, is key to providing better data- and event-driven systems that can efficiently exploit the wealth of diversity composing current high-performance systems for extreme-scale parallel computing. To this end, the whole stack, from the application programming interface down to the hardware, must be investigated for programmability, performance, scalability, energy and power efficiency, as well as resiliency and fault tolerance.
Scope of the workshop:
Researchers and practitioners from all over the world, from both academia and industry, working in the areas of programming languages, system software, hardware design, parallel computing, execution models, and resiliency modeling are invited to discuss state-of-the-art solutions, novel issues, recent developments, applications, methodologies, techniques, experience reports, and tools for the development and use of dataflow models of computation. Topics of interest include, but are not limited to, the following:
- Programming languages and compilers for existing and new languages — in particular single-assignment and functional languages
- System software: Operating systems, runtime systems
- Hardware design: ASICs and reconfigurable computing (FPGAs)
- Resiliency and fault-tolerance for parallel and distributed systems
- New dataflow inspired execution models — in particular strict and non-strict models
- Hybrid system design for control-flow and data-flow based systems
- Position papers on the future of dataflow in the era of many-core systems and beyond
Likely participants: Computer engineers and computer scientists, as well as parallel computing and compiler researchers in high-performance computing.