Lecture Descriptions
A deep dive into sustainable generative AI and HPC systems by Andrea Bartolini
The lecture will cover the foundations of sustainable computing, ranging from sand to large-scale systems targeting generative AI and High-Performance Computing (HPC). It will provide a deep dive into their building blocks and their associated environmental cost, with specific emphasis on RISC-V-based systems and practical considerations. Lab activities will include pen-and-paper calculations and hands-on experience on the RISC-V Monte Cimone computing cluster.
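As a taste of the kind of back-of-the-envelope estimate used in the lab activities, the short sketch below computes the energy and operational carbon footprint of a compute job; the power draw, node count, runtime, and grid carbon intensity are illustrative assumptions, not values from the lecture.

    # Back-of-the-envelope estimate of the operational footprint of a compute job.
    # All input figures are illustrative assumptions, not measurements from the lecture.
    node_power_kw = 0.7        # assumed average power draw per node (kW)
    num_nodes = 64             # assumed job size
    runtime_hours = 48         # assumed job duration
    carbon_intensity = 0.30    # assumed grid carbon intensity (kgCO2e per kWh)

    energy_kwh = node_power_kw * num_nodes * runtime_hours
    emissions_kg = energy_kwh * carbon_intensity

    print(f"Energy: {energy_kwh:.0f} kWh, operational emissions: {emissions_kg:.0f} kgCO2e")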
Barcelona Supercomputing Centre Director's final keynote by Mateo Valero
Lecture description to come.
Building and Evolving a Multilingual LLM: From First Steps to Specialization by Javier Aula Blasco and Júlia Falcão
The first part of this lecture will cover the process of training a multilingual Large Language Model (LLM) from scratch, from data collection up to the release of a first version of the base and instructed variants. The focus will be on what it takes for a model to learn to generate human-like text, the methods we can use to evaluate its performance, and the inherent limitations of these models, especially when trained on open and legally acquired data.
The second part of the lecture will tackle what comes after the release of the first version. We will describe how language models can be continually evaluated and improved, both in terms of their capabilities and their safety. We will also cover the process of aligning the model with human feedback, discuss how a generalist language model can serve as a starting point for more specific purposes such as use in the scientific domain, and consider the challenges that the evaluation and deployment of these domain-specific models present.
Distributed Data Analytics for AI in Supercomputing Systems by Josep Lluís Berral
Distributed data processing is a prerequisite for modern analytics and machine learning applications: High-Performance Data Analytics builds on data- and model-parallelism frameworks, which in turn can take advantage of High-Performance Computing infrastructures. This course introduces distributed analytics and stream processing through frameworks such as Apache Hadoop and Spark, along with the virtualization and containerization platforms that allow us to scale these frameworks in supercomputing environments.
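To give a flavour of the frameworks covered, the minimal PySpark sketch below runs a distributed aggregation; the file path and column names are hypothetical placeholders, not material from the course.

    from pyspark.sql import SparkSession

    # Minimal sketch of a distributed aggregation with Apache Spark.
    # "data/measurements.csv" and its "sensor"/"value" columns are hypothetical.
    spark = SparkSession.builder.appName("hpda-example").getOrCreate()

    df = spark.read.csv("data/measurements.csv", header=True, inferSchema=True)
    summary = df.groupBy("sensor").avg("value")  # computed in parallel across the executors
    summary.show()

    spark.stop()

The same program scales from a laptop to a supercomputing cluster by changing only the Spark deployment, which is precisely the property the course exploits when moving such frameworks onto HPC infrastructures.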
Expedited development of novel RISC-V instructions through an emulation-simulation framework
Given the widespread adoption of Artificial Intelligence (AI) by notable industry players such as Google, Huawei, NVIDIA, Intel, and others, a significant opportunity emerges for software developers and hardware architects to conceive and present innovative solutions readily integrable into commercial products. Furthermore, the advent of open-source Instruction Set Architectures such as RISC-V allows HPC and AI systems to be seamlessly augmented with new instructions that expedite various domain-specific algorithms, inviting computer architecture researchers to propose inventive instructions conducive to achieving this objective. However, designing and assessing new instructions for RISC-V presents a substantial time investment due to the current methodologies and simulation tools employed for evaluation. Notably, computer architecture and RTL simulators, such as gem5 and Verilator, entail protracted simulation times, rendering them inefficient for the initial phases of instruction development, which are characterized by iterative program versioning and integration of new instructions.
Our objective is to enhance the efficiency of designing and evaluating new instructions within the RISC-V ISA, thereby lowering the entry barrier for researchers seeking to apply RISC-V to their respective domain-specific challenges. We will provide a comprehensive explanation of our top-to-bottom evaluation framework, which enables quick design and evaluation of new RISC-V instructions, from software-stack integration through performance simulation. The instructional program will comprise a blend of informative lectures and interactive lab sessions. Initially, participants will receive a comprehensive overview of the diverse tools essential for software and hardware development on platforms supporting the RISC-V ISA. Subsequently, we will delve into the specifics of our evaluation framework, elucidating the use of an ISA emulator (QEMU) alongside a computer architecture simulator (gem5). Furthermore, we will explain how these tools are integrated to expedite the software development of new instructions via ISA emulation and to assess the performance of the developed instructions using computer architecture simulation. To facilitate hands-on learning, we will provide our evaluation framework as a Docker image encompassing the aforementioned tools, along with the requisite wrappers and scripts for seamless integration. Additionally, practical exercises and example code will be provided during the lab sessions, enabling attendees to gain first-hand experience in the design, integration, and evaluation of new RISC-V instructions using our framework. Ultimately, all participants will depart equipped with the evaluation framework, code samples, and the necessary proficiency to employ them as foundational elements in their individual research endeavors.
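To illustrate how a workload is run in a computer architecture simulator like gem5, below is a minimal syscall-emulation configuration sketch for a RISC-V binary, written in the style of the public Learning gem5 tutorial. It is not the lab's framework: class and port names follow recent gem5 releases and may differ in other versions, and the binary path is a placeholder for a program exercising a new instruction.

    import m5
    from m5.objects import *

    # Minimal syscall-emulation (SE) configuration for a RISC-V binary.
    # Names follow recent gem5 releases and may vary across versions.
    system = System()
    system.clk_domain = SrcClockDomain(clock="1GHz", voltage_domain=VoltageDomain())
    system.mem_mode = "timing"
    system.mem_ranges = [AddrRange("512MB")]

    system.cpu = RiscvTimingSimpleCPU()
    system.membus = SystemXBar()
    system.cpu.icache_port = system.membus.cpu_side_ports
    system.cpu.dcache_port = system.membus.cpu_side_ports
    system.cpu.createInterruptController()

    system.mem_ctrl = MemCtrl()
    system.mem_ctrl.dram = DDR3_1600_8x8()
    system.mem_ctrl.dram.range = system.mem_ranges[0]
    system.mem_ctrl.port = system.membus.mem_side_ports
    system.system_port = system.membus.cpu_side_ports

    binary = "path/to/riscv_binary"  # placeholder: program using the new instruction
    system.workload = SEWorkload.init_compatible(binary)
    process = Process(cmd=[binary])
    system.cpu.workload = process
    system.cpu.createThreads()

    root = Root(full_system=False, system=system)
    m5.instantiate()
    exit_event = m5.simulate()
    print("Exited @ tick", m5.curTick(), "because", exit_event.getCause())

Even this tiny timing-mode simulation is orders of magnitude slower than emulation with QEMU, which is the motivation for using the emulator during iterative software development and reserving the simulator for performance assessment.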
Harnessing HPC and AI for Interactive Analysis and Visualization of Large Scientific Datasets by Michela Taufer
Computing is ubiquitous, present in the cloud, in clusters at our institutions, and even on our laptops. However, a significant challenge remains: managing the vast amounts of data often generated remotely, at the edge, using experimental facilities or supercomputers in large national laboratories. When such data is stored in various public and private remote locations, moving it from the remote facilities to our desktop is impractical. Scientists dealing with this data often prefer to review it remotely before transferring only specific portions for closer AI-based analysis and visualization. Each step of this process is challenging: streaming the data, identifying and deploying tools for data analysis and visualization, interacting dynamically with the data, and exploring multiple datasets simultaneously.
This lecture addresses this significant challenge in HPC by presenting solutions to scientists’ need to interactively deploy efficient AI-based analysis and visualization tools for large scientific datasets. The lecture demonstrates how a data service initiative such as the National Science Data Fabric (NSDF) enables accessible, flexible, and customizable workflows for multi-faceted analysis and visualization of various datasets. The lecture walks through the workflow steps of generating large datasets through modular applications, storing this data remotely, and using AI to analyze the data locally to draw scientific conclusions. NSDF services allow users to stream data from public storage platforms like DataVerse or private storage platforms like Seal Storage and access an easy-to-use NSDF dashboard for immediate interaction with data.
The lecture highlights how to navigate every step of a modular workflow, efficiently handle different data formats for streaming, and use AI methods and visualization tools for scientific inference on selected data subsets. The lecture applies this new knowledge to experimental datasets in earth science, materials science, and other use cases. It equips participants with the skills to utilize data services for comprehensive scientific data analysis, guiding them through creating flexible workflows, managing data across various storage solutions, and deploying data visualization and analysis tools. Attendees will learn to manage substantial datasets and incorporate them into their applications, facilitating better access to data and advancing scientific exploration.
Introduction to Quantum Computing and Qilimanjaro
Conventional computers are reaching their limits. Complex simulation problems of the world around us in the fields of chemistry, physics, and materials, as well as optimization problems prevalent in industry, do not find efficient solutions with current technology. Moreover, the growing data processing requirements of the digital revolution, now even more so with the inclusion of Artificial Intelligence, call for a new, more powerful and sustainable computing paradigm. In this talk, we will explore the fundamentals of quantum computing, its advantages, applications, and challenges, as well as Qilimanjaro's efforts towards this new and promising technology and its goal of creating the first Quantum Data Center in Europe in the heart of Barcelona to solve problems that cannot be solved today.
Invited Talk by Luca Benini
Lecture description to come.
RISC-V prototypes for HPC: Maturity and Methods by Pablo Vizcaino
In this lecture we introduce the RISC-V Instruction Set Architecture (ISA), highlighting the importance of a standard in computer architecture and describing the peculiarities and rationale behind this ISA. We will see the different extensions of RISC-V, with a special focus on the Vector extension, which is adopted in HPC and AI acceleration solutions. We will also analyze the RISC-V based CPUs and accelerators that are currently available or under development, including a long-vector accelerator developed within the European Processor Initiative (EPI). The lecture analyzes this architecture and its main differences from other long-vector architectures, alongside the Software Development Vehicles (SDV) methodology introduced by the Barcelona Supercomputing Center to develop software for this accelerator. The SDV methodology is composed of RISC-V platforms, compilers, software and hardware emulators, and performance measurement and analysis tools, which will show students how to develop and run complex codes on this architecture.
Workflows for HPC & AI by Rosa Badia
With Exaflop systems already here, High-Performance Computing (HPC) involves ever larger and more complex supercomputers. At the same time, the user community is aware of the underlying performance and eager to exploit it with increasingly complex application workflows. What is more, current application trends aim to combine data analytics and artificial intelligence with HPC modeling and simulation.
However, the programming models and tools are different in these fields, and there is a need for methodologies that enable the development of workflows combining HPC software, data analytics, and artificial intelligence. PyCOMPSs is a parallel task-based programming model for Python. Sequential Python programs decorated with simple annotations are executed in parallel on HPC clusters and other distributed infrastructures. PyCOMPSs has been extended to support tasks that invoke HPC applications and combine them with artificial intelligence and data analytics frameworks.
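To hint at what those annotations look like, the sketch below marks a plain Python function as a PyCOMPSs task so that independent calls can run in parallel on the cluster; the function body and its inputs are hypothetical placeholders.

    from pycompss.api.task import task
    from pycompss.api.api import compss_wait_on

    @task(returns=1)
    def simulate_case(params):
        # hypothetical placeholder for an expensive simulation or analysis step
        return sum(params)

    # Independent task invocations are scheduled in parallel by the runtime
    partial = [simulate_case(p) for p in ([1, 2], [3, 4], [5, 6])]
    results = compss_wait_on(partial)   # synchronize and retrieve the results
    print(results)

The script stays valid sequential Python; when launched with the COMPSs runtime (for example via runcompss), each decorated call becomes a task distributed across the available nodes.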
The lecture will be composed of two parts. The first part will present how these hybrid HPC-AI workflows are developed, illustrated with examples from the CAELESTIS project, among others. The lecture will place special emphasis on the PyCOMPSs programming model and the dislib machine learning library. It will also explain how workflows are deployed with containers and executed with the container engines supported on HPC systems.
The lecture will include a hands-on session on programming and executing PyCOMPSs workflows. In the hands-on session, students will work with an example that combines the execution of HPC simulations with machine learning methods implemented with the dislib library, and will execute it on the MareNostrum 5 system.
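For illustration, a minimal dislib sketch in the spirit of the hands-on session might look as follows; the data is randomly generated and the block sizes and cluster count are arbitrary assumptions (exact call names may differ between dislib versions).

    import dislib as ds
    from dislib.cluster import KMeans

    # Distributed array of 10,000 x 8 random samples, partitioned into 1,000 x 8 blocks;
    # each block is handled by PyCOMPSs tasks and processed in parallel.
    x = ds.random_array((10000, 8), (1000, 8))

    km = KMeans(n_clusters=4)           # arbitrary number of clusters for illustration
    km.fit(x)
    labels = km.predict(x).collect()    # gather the distributed result locally
    print(labels[:10])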