Invited Talks

Keynote Talks

Molecular Computing by Luca Cardelli

Digital computers allow us to manipulate information systematically, leading to recent advances in our ability to structure our society and to communicate in richer ways. They also allow us to orchestrate physical forces, transforming and optimizing our manufacturing processes. What they cannot do very well is interact directly with biological organisms or, more generally, orchestrate molecular arrangements. Thanks to biotechnology, nucleic acids (DNA/RNA) are particularly effective 'user-programmable' entities at the molecular scale. They can be directed to assemble nano-scale structures, to produce physical forces, to act as sensors and actuators, and to do general computation in between. We will be able to interface this new class of devices with biological machinery, to detect and cure diseases at the cellular level under program control. The theory of computability directed the design of digital computers, and it can now inform the development of new computational fabrics, at the molecular level, that will eventually give us control of an entirely new domain of reality.

Machine Learning for Healthcare by Mihaela van der Schaar

Medicine stands apart from other areas where machine learning can be applied. While we have seen advances in other fields with lots of data, it is not the volume of data that makes medicine so hard; it is the challenges arising from extracting actionable information from the complexity of the data. It is these challenges that make medicine the most exciting area for anyone who is really interested in the frontiers of machine learning – giving us real-world problems where the solutions are societally important and potentially impact us all. Think COVID-19! In this talk I will show how machine learning is transforming medicine and how medicine is driving new advances in machine learning, including new methodologies in automated machine learning, interpretable and explainable machine learning, dynamic forecasting, and causal inference.

Revolution in High Performance Computing to accelerate scientific progress by Mateo Valero

Invited Talks

Avispado: A RISC-V core supporting the RISC-V vector instruction set by Roger Espasa

In this talk, SemiDynamics will discuss its family of high-bandwidth RISC-V application cores, targeted at application domains such as Machine Learning, Recommendation Systems, Sparse Computation, HPC and Key-Value Store. We will describe the open vector interface that allows connecting a RISC-V vector unit to SemiDynamics cores.

Building an Open Source Ecosystem for HPC by John Davis

While Moore's Law is not dead, it is slowing and becoming more difficult to sustain. New fabs are becoming cost prohibitive to build, stagnating the move to the next technology node, and traditional CMOS scaling approaches have come to an end. In this new technology environment, some of the rules have changed. This has produced a shift from abundant transistors to efficient use of transistors. Thus, to truly meet the HPC power and performance (FLOPS/W) requirements, we must specialize the hardware, using co-design of the full stack: all layers of hardware and software. This level of integration is not possible in a closed or even partially open ecosystem. Openness is required to tailor your hardware platform to the applications, thereby achieving the desired performance in the power-constrained environment. Mirroring the Linux model, RISC-V has followed a similar development path and has enjoyed significant industrial and academic adoption. Like Linux before it, the RISC-V ecosystem is in the nascent period where it can become the de facto open hardware platform of the future. The RISC-V ecosystem has the same opportunity in hardware that Linux created as a foundation for open source software. This enables the co-design of the RISC-V hardware and the entire software stack, creating a better overall solution than the closed-hardware approach used today. In this talk, we will look at the ingredients required to create this open HPC ecosystem of the future.

Challenges and Solutions applying HPC to Interdisciplinary Research by Sunita Chandrasekaran

Legacy scientific code written years ago cannot run effectively on today's computing systems without being updated to exploit modern compute resources. Such an effort requires intense collaboration between domain and computer scientists and often involves restructuring the code to make it suitable for the current platform. This should not be misconstrued as merely "engineering": such collaborations typically lead to further scientific advancements that may not have been possible without interdisciplinary research. This talk will focus on challenges and potential solutions in interdisciplinary research, highlighting some of the best practices learnt while working with solar, bio, plasma and nuclear physicists.

Computer Number Formats to Accelerate Deep Neural Network Training by Marc Casas

The use of Deep Neural Networks (DNNs) is becoming ubiquitous in areas like computer vision, language translation, or scientific computing. DNNs display remarkable pattern detection capabilities. In particular, Convolutional Neural Networks (CNNs) are able to accurately detect and classify objects over large image data, and Recurrent Neural Networks (RNNs) using encoder-decoder models are capable of solving tasks like Neural Machine Translation (NMT). In this context, several techniques successfully mitigate training costs by replacing the standard floating-point 32-bit (FP32) arithmetic with alternative approaches that employ non-standard low-precision data representation formats, reducing memory storage, bandwidth requirements, and compute costs. Indeed, hardware vendors have incorporated half-precision data formats and have implemented mixed-precision (MP) instructions, which aim at reducing memory bandwidth and storage consumption. This talk will discuss the advantages and limitations of these proposals and will describe an approach to dynamically adapt floating-point arithmetic, which makes it possible to use full half-precision arithmetic for up to 96.4% of the time when training state-of-the-art neural networks, while delivering accuracy very similar to that of 32-bit floating-point arithmetic.
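A minimal sketch of why low-precision training needs care (illustrative only, using NumPy's float16/float32 types rather than the hardware MP instructions discussed in the abstract): small weight updates vanish when accumulated directly in half precision, while a 32-bit master copy preserves them.

```python
import numpy as np

# One weight, updated 1000 times with a small step of 1e-4.
step = np.float16(1e-4)

# Naive: accumulate directly in FP16. Near 1.0 the FP16 spacing is
# 2**-10 ~ 0.00098, so adding 1e-4 rounds back to 1.0 every time.
w_fp16 = np.float16(1.0)
for _ in range(1000):
    w_fp16 = np.float16(w_fp16 + step)

# Mixed precision: keep an FP32 master weight and cast down to FP16
# only for storage, as MP training schemes do.
w_master = np.float32(1.0)
for _ in range(1000):
    w_master += np.float32(step)
w_mixed = np.float16(w_master)

print(w_fp16)   # stays at 1.0 -- every update was rounded away
print(w_mixed)  # ~1.1 -- updates survived in the FP32 accumulator
```

The same rounding effect motivates the dynamic precision adaptation described in the talk: half precision suffices most of the time, but some accumulations need wider formats.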

Deep Learning Hardware: Past, Present, and Future by Bill Dally

The current resurgence of artificial intelligence is due to advances in deep learning. Systems based on deep learning now exceed human capability in speech recognition, object classification, and playing games like Go. Deep learning has been enabled by powerful, efficient computing hardware. The algorithms used have been around since the 1980s, but it has only been in the last decade - when powerful GPUs became available to train networks - that the technology has become practical. Advances in DL are now gated by hardware performance. This talk will review the current state of deep learning hardware and explore a number of directions to continue performance scaling in the absence of Moore’s Law. Topics discussed will include number representation, sparsity, memory organization, optimized circuits, and analog computation.
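As a concrete illustration of one of these topics, a minimal sketch (plain Python, illustrative only, not the talk's hardware mechanisms) of how weight sparsity saves storage and work: a pruned weight matrix kept in compressed sparse row (CSR) form stores only the nonzeros plus index metadata, and a matrix-vector product touches only those nonzeros.

```python
def to_csr(dense):
    """Convert a dense matrix (list of rows) to CSR arrays."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x, visiting only the stored nonzeros."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# A 90%-pruned 4x5 weight matrix: 2 stored nonzeros instead of 20 entries.
W = [[0, 0, 3, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 5, 0, 0, 0],
     [0, 0, 0, 0, 0]]
vals, cols, ptr = to_csr(W)
print(vals, cols, ptr)                               # [3, 5] [2, 1] [0, 1, 1, 2, 2]
print(csr_matvec(vals, cols, ptr, [1, 2, 3, 4, 5]))  # [9, 0, 10, 0]
```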

EuroHPC - The European High Performance Computing Joint Undertaking by Josephine Wood

HPC-Empowered Accessible, Affordable, and Effective AI Computing by Min Li

From the viewpoint of an industrial player, in this talk we would like to discuss the following: (1) the inspiring challenges for future AI computing; (2) how the technology stack should be prepared for future accessible, affordable, and effective AI computing systems; (3) examples of industrial and academic projects that we have empowered in the AI computing sector.

Programming parallel codes with PyCOMPSs by Rosa M. Badía

PyCOMPSs is a parallel task-based programming model in Python. Based on simple annotations, sequential Python programs can be executed in parallel on HPC clusters and other distributed infrastructures. In addition to the traditional directionality clauses that enable the runtime to infer the data dependencies between tasks, PyCOMPSs has been recently extended to support task resource constraints, task versions, and multi-threaded and multi-node tasks. The talk will present the PyCOMPSs programming and runtime basics and its recent extensions for edge-to-cloud environments.
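A minimal sketch of the annotation style the abstract refers to. In real PyCOMPSs the decorator comes from `pycompss.api.task` and the COMPSs runtime schedules each call asynchronously, inferring dependencies from the annotations; since that runtime is not assumed here, this sketch uses a stand-in decorator that runs tasks eagerly, just to show the shape of an annotated program.

```python
# Stand-in for PyCOMPSs' @task decorator (pycompss.api.task).
# The real runtime would turn each call into an asynchronous task
# and track data dependencies; this version simply runs eagerly.
def task(returns=None):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def compss_wait_on(obj):
    # Real PyCOMPSs blocks here until the task result is available;
    # with the eager stand-in the value is already computed.
    return obj

@task(returns=int)
def square(x):
    return x * x

@task(returns=int)
def add(a, b):
    return a + b

# Sequential-looking code: the runtime would infer that the two
# square() tasks are independent (and can run in parallel) and that
# add() depends on both of their results.
a = square(3)
b = square(4)
total = compss_wait_on(add(a, b))
print(total)  # 25
```

Under the real runtime the same code runs unchanged; only the imports point at `pycompss.api.task` and `pycompss.api.api` instead of the local stand-ins.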

Using HPC to run AI training for spotting illegal buildings. How to develop a real use case for real customers by Fabio Previtali

Women in Computing in Europe by Ruth Lennon