Keynote Speakers

Jack Dongarra

The University of Tennessee, Oak Ridge National Laboratory, and University of Manchester


An Overview of High Performance Computing and Future Requirements


In this talk we examine how high performance computing has changed over the last ten years and look toward the future in terms of trends. These changes have had, and will continue to have, a significant impact on our numerical scientific software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed, and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.
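As a small illustration of the memory-hierarchy management mentioned above, the following is a minimal sketch (not from the talk; the function name and block size are illustrative) of cache blocking, a classic technique in numerical libraries: a matrix multiply is restructured to work on small tiles so that each tile stays resident in a fast level of the memory hierarchy.

```python
def blocked_matmul(A, B, bs):
    """Tiled multiply of square list-of-lists matrices.
    Working on bs-by-bs tiles keeps the active data small enough to
    fit in a fast level of the memory hierarchy (cache blocking)."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):          # loop over tiles of C
        for jj in range(0, n, bs):
            for kk in range(0, n, bs):  # accumulate one tile product
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a, row_b, row_c = A[i][k], B[k], C[i]
                        for j in range(jj, min(jj + bs, n)):
                            row_c[j] += a * row_b[j]
    return C
```

The loop order and tile size are exactly the kind of parameters that production libraries tune per architecture, by compile-time specialization, run-time autotuning, or both.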

Specialization vs. Abstraction: Parallel Programming Perspectives


While different kinds of faster and more complex accelerators compete with increasingly sophisticated “classical” multi-/many-core shared- and distributed-memory architectures, new parallel programming models and frameworks have been proposed with the aim of improving programmer productivity, delivering faster applications, and efficiently targeting different, heterogeneous hardware architectures. High-level programming abstractions usually improve programmer experience and code portability; specialized programming abstractions, on the other hand, usually improve the efficiency of application code. Against the backdrop of the few de facto standard programming models used for the majority of HPC applications, we discuss how abstraction and specialization may be used synergistically at different levels of a modern parallel programming toolchain to retain both programmability and portability—not only across different architectures with similar amounts of parallel resources, but also across similar or different architectures with significantly different amounts of parallel resources—and the efficiency of applications.
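To make the abstraction-vs-specialization tension concrete, here is a minimal sketch (illustrative only; the function name and defaults are not from the talk) of a high-level map+reduce pattern whose backend is a swappable specialization point: the algorithm is written once, while the executor can be exchanged for threads, processes, or in principle an accelerator-specific runtime.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def par_map_reduce(f, op, xs, executor_cls=ThreadPoolExecutor, workers=4):
    """High-level parallel pattern: apply f to every item of xs in
    parallel, then combine the results with the associative operator op.
    The executor class is the specialization point; the algorithm does
    not change when the backend does."""
    with executor_cls(max_workers=workers) as ex:
        return reduce(op, ex.map(f, xs))
```

For example, `par_map_reduce(lambda x: x * x, operator.add, range(10))` computes a parallel sum of squares; swapping in `ProcessPoolExecutor` parallelizes across cores without touching the call site.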

Marco Danelutto

Università di Pisa


Ivona Brandic

Vienna University of Technology


Challenges in the Design of Hybrid Classic-Quantum Systems

As data volumes grow faster than computing power, the computer science community is forced to look for alternatives beyond the von Neumann architecture. Among the different architectures currently being developed, quantum computing is one of the most promising.
In this talk we discuss the concept of a hybrid classic-quantum architecture and the challenges of executing an application on a hybrid computational continuum, where parts of the application run on a classical machine and parts on a quantum machine. We discuss the problems and challenges caused by the complexities of noise, hyperparameter optimization, and data encoding.
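A typical instance of such a hybrid loop is a variational algorithm: a classical optimizer repeatedly tunes the parameters of a circuit that runs on the quantum device. The sketch below is illustrative only (the names are hypothetical, and the "quantum" call is simulated classically as cos(theta), the exact Z-expectation after an RY rotation on |0>); on real hardware that call would dispatch a circuit and return a noisy shot-based estimate.

```python
import math

def expval(theta):
    """Stand-in for the quantum side: expected <Z> after RY(theta)|0>,
    which is cos(theta). On hardware this would execute a circuit and
    estimate the expectation from (noisy) measurement counts."""
    return math.cos(theta)

def hybrid_minimize(theta=0.3, lr=0.4, steps=100):
    """Classical side: gradient descent on the circuit parameter using
    the parameter-shift rule, which costs two extra circuit evaluations
    per parameter and step."""
    for _ in range(steps):
        grad = 0.5 * (expval(theta + math.pi / 2) - expval(theta - math.pi / 2))
        theta -= lr * grad
    return theta, expval(theta)
```

In this noiseless toy the loop drives theta toward pi, where the expectation reaches its minimum of -1; the challenges named in the abstract (noise in `expval`, choosing hyperparameters such as `lr`, and encoding classical data into circuit parameters) all live inside exactly this loop.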

PDP 2024 will take place in Dublin, Ireland, on March 20–22, 2024, hosted by the Cloud Competency Centre at the National College of Ireland.

Sponsors and Supporters
National College Ireland

Digital4Business

Digital Technology Skills

Matrix Internet

Fáilte Ireland