
DFM 2019: The 8th IEEE International Workshop on Data Flow Models and Extreme-Scale Computing

This workshop is organized as part of the activities of the IEEE Computer Society Dataflow STC.

DFM 2019 Program

DFM 1: The 8th IEEE International Workshop on Data Flow Models and Extreme-Scale Computing
Friday July 19, 1:00 – 2:30
Location: AMU 163

Session Chair: Stéphane Zuckerman, Université Paris-Seine, Université de Cergy-Pontoise, ENSEA, CNRS

Invited Talk: Dataflow: Best of Times or Worst of Times?
Erik Altman, IBM

Toward A High-Performance Emulation Platform for Brain-Inspired Intelligent Systems – Exploring Dataflow-Based Execution Model and Beyond
Sihan Zeng, Jose Monsalve and Siddhisanket Raskar

A Disparity Computation Framework
Gabriel Vieira, Fabrizzio Soares, Junio Lima, Gustavo Laureano, Ronaldo Costa, Hugo Nascimento and Júlio Ferreira

DFM 2: The 8th IEEE International Workshop on Data Flow Models and Extreme-Scale Computing
Friday July 19, 3:00 – 4:15
Location: AMU 163

Session Chair: Jean-Luc Gaudiot, University of California at Irvine, USA

Position Paper: Extending Codelet Model for Dataflow Software Pipelining using Software-Hardware Co-design
Siddhisanket Raskar, Thomas Applencourt, Kalyan Kumaran and Guang Gao

A Functional Programming Model for Embedded Dataflow Applications
Christoph Kühbacher, Christian Mellwig, Florian Haas and Theo Ungerer

An Evaluation of An Asynchronous Task Based Dataflow Approach for Uintah
Alan Humphrey and Martin Berzins

DFM 3: The 8th IEEE International Workshop on Data Flow Models and Extreme-Scale Computing
Friday July 19, 4:15 – 5:30
Location: AMU 163

Session Chair: Stéphane Zuckerman, Université Paris-Seine, Université de Cergy-Pontoise, ENSEA, CNRS

Panel: Execution and Programming Models: Extreme Scale and Beyond
Chair: Stéphane Zuckerman, Université Paris-Seine, Université de Cergy-Pontoise, ENSEA, CNRS
Panelists: Erik Altman, IBM, USA
Hironori Kasahara, Waseda University, Japan
Karthikeyan Sankaralingam, U. of Wisconsin, USA
Jean-Luc Gaudiot, U. of California, Irvine, USA
Guang R. Gao, U. of Delaware, USA

Call for Papers

Computer systems, for both high-performance and embedded computing, have now fully embraced parallelism at the hardware and software levels. From the HPC viewpoint, new challenges have arisen that are familiar issues in the embedded world: power and energy efficiency are now major obstacles to building efficient supercomputers. Conversely, harnessing truly parallel systems is now necessary to efficiently exploit embedded systems equipped with multiple cores. Moreover, fault tolerance and resiliency must be taken into consideration at both the hardware and software levels. Finally, many such systems, embedded and HPC alike, are networked together, forming extremely large distributed and parallel systems.

Dataflow-inspired models of computation, once discarded by the sequential programming crowd, are again considered serious contenders to help increase programmability, performance, and scalability in highly parallel and extreme-scale systems. By their very nature, dataflow- and event-driven-inspired models tend to naturally solve (if only partially) some of the newer problems related to power and energy efficiency, and they provide fertile ground for implementing efficient fault-tolerance and resiliency mechanisms, as many of the required properties are enmeshed in the models themselves. Yet, to achieve high scalability and performance, modern computing systems, both HPC and embedded, rely on heterogeneous means to carry out computations: GPUs, FPGAs, etc. Meanwhile, legacy programming and execution models such as MPI and OpenMP are adding asynchronous and data-driven constructs, all the while trying to account for the very complex hardware targeted by parallel applications. Consequently, programming and execution models that try to combine both legacy control-flow-based and dataflow-based aspects of computing have become increasingly complex to handle.

Developing new models and their implementations, from the application programmer level, through the system level, down to the hardware level, is key to providing better data- and event-driven systems that can efficiently exploit the wealth of diversity composing current high-performance systems for extreme-scale parallel computing. To this end, the whole stack, from the application programming interface down to the hardware, must be investigated for programmability, performance, scalability, energy and power efficiency, as well as resiliency and fault tolerance. All these aspects may affect high-performance computing and embedded systems differently.
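The data-driven firing rule at the heart of these models can be illustrated with a minimal sketch (a toy interpreter written for this description, not the execution model of any system or paper named above): each node fires as soon as all of its input tokens are available, and its output token flows directly to its consumers.

```python
# Minimal sketch of the dataflow firing rule (illustrative only): a node
# "fires" as soon as all of its input tokens have arrived; there is no
# program counter imposing a sequential order on the operations.
from collections import deque

class Node:
    def __init__(self, name, op, arity):
        self.name, self.op, self.arity = name, op, arity
        self.inputs = {}      # input slot index -> token value
        self.successors = []  # (node, slot) pairs consuming our output

    def receive(self, slot, value, ready):
        self.inputs[slot] = value
        if len(self.inputs) == self.arity:  # all operands present:
            ready.append(self)              # the node is enabled to fire

def run(ready):
    ready = deque(ready)
    results = {}
    while ready:
        node = ready.popleft()
        out = node.op(*(node.inputs[i] for i in range(node.arity)))
        results[node.name] = out
        for succ, slot in node.successors:  # output token flows onward
            succ.receive(slot, out, ready)
    return results

# Dataflow graph for (a + b) * (a - b), with a = 5 and b = 3.
add = Node("add", lambda x, y: x + y, 2)
sub = Node("sub", lambda x, y: x - y, 2)
mul = Node("mul", lambda x, y: x * y, 2)
add.successors = [(mul, 0)]
sub.successors = [(mul, 1)]

ready = []
add.receive(0, 5, ready); add.receive(1, 3, ready)
sub.receive(0, 5, ready); sub.receive(1, 3, ready)
print(run(ready)["mul"])  # prints 16
```

Note that `add` and `sub` may fire in either order without changing the result; the graph's data dependences, not a control-flow sequence, determine what can execute, which is precisely the property that makes such models attractive for highly parallel hardware.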

Scope of the workshop:

Researchers and practitioners all over the world, from both academia and industry, working in the areas of languages, system software, hardware design, parallel computing, execution models, and resiliency modeling are invited to discuss state-of-the-art solutions, novel issues, recent developments, applications, methodologies, techniques, experience reports, and tools for the development and use of data flow models of computation. Topics of interest include, but are not limited to, the following:

  • Programming languages and compilers for existing and new languages — in particular single-assignment and functional languages
  • System software: Operating systems, runtime systems
  • Hardware design: ASICs and reconfigurable computing (FPGAs)
  • Resiliency and fault-tolerance for parallel and distributed systems
  • New data flow inspired execution models — in particular strict and non-strict models
  • Hybrid system design for control-flow and data-flow based systems
  • Position papers on the future of data flow in the era of parallel and distributed many-core systems, and beyond, including heterogeneous systems

DFM Organizer

Stéphane Zuckerman, Laboratoire ETIS — Université Paris-Seine, Université de Cergy-Pontoise, ENSEA, CNRS
Email: stephane.zuckerman@u-cergy.fr

Program Committee

Skevos Evripidou, University of Cyprus
Guang Gao, University of Delaware
Jean-Luc Gaudiot, University of California at Irvine
Vivek Sarkar, Rice University
Ian Watson, University of Manchester
Kei Hiraki, University of Tokyo
David Abramson, Monash University
Costas Kyriacou, Frederick University
Pedro Trancoso, University of Cyprus
Kyriacos Stavrou, Intel Labs Barcelona
John Feo, Pacific Northwest National Laboratory
Bob Iannucci, Carnegie Mellon University Silicon Valley
Walid Najjar, University of California, Riverside
Wolfgang Karl, Karlsruhe Institute of Technology
Mark Oskin, University of Washington
Andrew Sohn, New Jersey Institute of Technology
Reiner Hartenstein, TU Kaiserslautern
Kemal Ebcioglu, Global Supercomputing Corp.
Kevin Hammond, University of St. Andrews
Roberto Giorgi, University of Siena
Robert Clay, Sandia National Labs
Sven-Bodo Scholz, Heriot-Watt University
Krishna Kavi, University of North Texas
Yong Meng Teo, National University of Singapore