The WiDE workshop aims to bring together researchers working on all aspects of distributed workflows. Managing distributed workflows is a complex endeavor covering a broad range of diverse, interoperating concerns: design patterns and languages, orchestration tools, performance monitoring, benchmarking procedures, distributed FAIRness, end-to-end privacy and security, and many more. In the same spirit that inspired the Workflow Community Initiative, the WiDE workshop will allow researchers to share their knowledge on specific aspects of the topic and gain insights from different points of view. In addition, the direct exchange of views and ideas will be further encouraged by an open discussion session at the end of the event.
Workflow models represent a powerful abstraction for designing complex applications and executing them on large-scale distributed architectures, such as HPC centers, Grid environments, Cloud infrastructures, and the Continuum. However, modeling, orchestrating, and monitoring distributed workflows pose unique challenges that remain open research questions.
When considering data-oriented workflows, all aspects of data management become crucial for performance optimization, privacy preservation, and security. Modern programming paradigms for in-situ workflows and Big Data analysis focus on data locality: moving computation closer to the data to remove data transfer overheads and risks. More recently, the successful adoption of federated learning approaches to data analytics has shifted support for distributed environments from a scalability-related feature to a hard requirement, as data cannot be moved by contract.
Besides, there are scenarios in which it is worthwhile, or even unavoidable, to transfer data between different modules of a complex application. Integrating large-scale simulations with deep-learning workloads or quantum-based optimization processes already requires orchestrating modular applications on heterogeneous distributed environments, and the HW/SW end-to-end co-design trend will only exacerbate this heterogeneity in the coming years. Flexible fault-tolerance strategies, end-to-end secure and privacy-preserving communication protocols, and location-aware provenance standards are needed to build a mature ecosystem for future workflows, and community-agreed benchmarking procedures and tools are necessary to push research and development forward.
Scope of the workshop:
Topics of interest include, but are not limited to, the following:
- Benchmarking methodologies and tools for distributed workflows
- Design patterns for parallel and distributed workflows
- Distributed provenance and FAIRness in distributed workflows
- Fault-tolerant distributed workflows
- Federated and privacy-preserving workflows
- I/O orchestration for distributed workflows
- Practical applications of distributed workflows
- Scheduling workflows in heterogeneous environments
- Workflows for AI and AI for workflows
- Workflows for the Convergence of HPC, Big Data and ML
- Workflows in the Computing Continuum
- Workflow models, languages and tools for heterogeneous distributed systems
The WiDE workshop promotes Open Science. Publishing Open Access data, code, and article preprints on arXiv.org, TechRxiv.org, or any not-for-profit preprint server approved by the IEEE Publication Services and Products Board (under the conditions laid down by the IEEE post-publication policies) does not in any way prevent submission to the WiDE workshop.