Lecture Descriptions
Applied Impact of AI in Science and Engineering by Mohaned Wahib
This talk explores the transformative impact of artificial intelligence (AI) in science and engineering. We will highlight recent advances where AI techniques have accelerated discovery, improved predictions, and enabled new capabilities across diverse fields, including physics, materials science, and engineering design. Through real-world examples and case studies, we will discuss both the opportunities and challenges in applying AI to scientific research and engineering practice. The talk aims to provide insights into how AI is reshaping traditional workflows and driving innovation at the intersection of data and discovery.
Diagnostic studies of High-Resolution Global Climate Model by Yoshiyuki Kajikawa
Advances in HPC have brought higher resolution and greater sophistication to climate models. We are now entering the era of “kilometre-scale” modeling of the climate system. With these developments, it is natural that climate model analysis should advance as well. In this lecture, we will introduce diagnostic studies of high-resolution climate model simulations, drawing on pioneering recent studies. In particular, we will focus on how convection and its aggregation process are represented in regional and global climate models. In the latter half of the lecture, we will also show how the reproducibility of climate fields and elements improves across various spatio-temporal scales when cumulus convection is resolved. We would like to share and discuss the direction of a renewed climate science built on high-resolution climate model simulations.
Earth Observation Data Analysis Using Machine Learning by Naoto Yokoya
This lecture focuses on applying machine learning techniques to Earth observation data analysis. We start by exploring different applications of Earth observation data, such as environmental monitoring, disaster management, and urban planning. After an introduction to machine learning, we delve into the basics of neural networks and how they can be applied to remote sensing data. Participants will take part in a hands-on session where they will build a neural network model for building damage classification. They will then explore automated mapping techniques using remote sensing imagery, with a particular focus on semantic segmentation for land cover mapping. Another hands-on session will guide participants through practical implementation steps for land cover mapping using machine learning algorithms.
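The hands-on framework used in the session is not specified here; as a minimal, library-free illustration of the kind of model involved, the following NumPy sketch trains a tiny neural network on synthetic "damaged / undamaged" feature vectors. All data, dimensions, and hyperparameters are invented for illustration and are not part of the actual course material.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-building features extracted from imagery
# (e.g. pre/post-event pixel statistics); NOT real remote sensing data.
X = rng.normal(size=(200, 4))
w_true = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)  # 1 = damaged

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained with plain gradient descent on logistic loss.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

for _ in range(1000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()    # predicted damage probability
    grad_out = (p - y)[:, None] / len(y)        # d(loss)/d(logit)
    gW2 = h.T @ grad_out; gb2 = grad_out.sum(0)
    grad_h = grad_out @ W2.T * (1 - h**2)       # backprop through tanh
    gW1 = X.T @ grad_h; gb1 = grad_h.sum(0)
    for P, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        P -= 0.5 * g

acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The same forward/backward structure, scaled up to convolutional layers and real image patches, underlies the damage classification and semantic segmentation models used in practice.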
Introduction to Fugaku by Jorji Nonaka
This session will provide an overview of the hardware and software resources of the supercomputer Fugaku available to users, as well as a hands-on introduction to their use both via a traditional CLI (Command Line Interface) terminal and via a Web-based GUI (Graphical User Interface) provided by the Fugaku Open OnDemand service. The overview also covers information resources such as the user guides, operational status, and user support.
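As a taste of the CLI workflow, the sketch below shows a minimal batch script in the style of Fugaku's Fujitsu job scheduler; the resource group name and limits are placeholders based on common conventions, so consult the user guides introduced in the session for current values.

```sh
#!/bin/bash
#PJM -L "node=1"            # number of compute nodes
#PJM -L "rscgrp=small"      # resource group (placeholder; see user guide)
#PJM -L "elapse=00:10:00"   # wall-clock time limit

./my_program                # your executable
```

A script like this would typically be submitted with `pjsub job.sh` and monitored with `pjstat`.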
Introduction to HPC Applications and systems by Bernd Mohr
In this introductory lecture, students will learn what "high performance computing" (HPC) means and what differentiates it from more mainstream areas of computing. They will also be introduced to the major application areas that use HPC in research and industry, and to how AI and HPC interact with each other. The lecture will then present the major HPC system architectures needed to run these applications (distributed and shared memory, hybrid, and heterogeneous systems).
Introduction to HPC Programming by Bernd Mohr
In this second introductory lecture, students will be provided with an overview of the programming languages, frameworks, and paradigms used to program HPC applications and systems. They will learn how MPI can be used to program distributed memory systems (clusters), how OpenMP can be used for shared memory systems, and finally, how to program graphics processing units (GPUs) with OpenMP, OpenACC, or lower-level methods like CUDA or ROCm/HIP.
Mapping Irregular Computations to Accelerator-Based Exascale Systems by Kathy Yelick
As traditional technology drivers of computing performance level off, the use of accelerators with various levels of specialization is growing in importance. At the same time, data movement continues to dominate running time and energy costs, making communication cost reduction the primary optimization criterion for compilers and programmers. This requires new ways of thinking about algorithms to minimize, hide, and manage communication and to expose fine-grained parallelism. These changes will affect the theoretical models of computing, the analysis of performance, the design of algorithms, and the practice of programming.
In this talk I will discuss prior work and open problems in optimizing communication, avoiding synchronization, and tolerating nondeterminism, using data analysis and machine learning problems from biology as driving examples. I will discuss distributed data structures and communication optimizations in large-scale genome analysis, including metagenome assembly, protein clustering, and more. These algorithms represent data analysis “motifs” including hashing, alignment, generalized n-body problems, and sparse matrices. I will give an overview of the parallelization approaches and highlight some of the resulting scientific insights.
Molecular dynamics simulations of biomolecular systems using GENESIS by Chigusa Kobayashi
Advances in experimental techniques have increasingly clarified the link between biomolecular structures and their biological functions, but obtaining detailed information on reaction pathways and transient intermediates remains challenging. Molecular dynamics (MD) simulations, which solve Newton’s equations of motion to compute atomic trajectories, offer a powerful approach to address this issue. To further advance MD methodology, the Generalized Ensemble Simulation System (GENESIS) has been designed as highly efficient and accurate MD software by Dr. Yuji Sugita (RIKEN) and his collaborators. GENESIS achieves excellent scalability on massively parallel supercomputers, including Fugaku, and is widely used by researchers worldwide under the LGPL license.
In this tutorial, the first half will offer a lecture-style overview of MD simulations, the algorithms implemented in GENESIS, and the types of calculations that can be performed with the software. The second half will consist of hands-on exercises that guide participants through the practical use of GENESIS - from system equilibration to analyzing the results - following a realistic research workflow.
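As background for the lecture half, the computational core of any MD engine is a time integrator for Newton's equations of motion. The following toy Python sketch (not GENESIS code) applies the standard velocity Verlet scheme, used by most MD codes, to a single particle in a harmonic potential and checks that total energy is conserved:

```python
# Toy MD: one particle in a harmonic potential U(x) = 0.5*k*x^2,
# integrated with velocity Verlet. Units and parameters are arbitrary.
k, m, dt = 1.0, 1.0, 0.01
x, v = 1.0, 0.0          # initial position and velocity

def force(x):
    return -k * x        # F = -dU/dx

energies = []
f = force(x)
for _ in range(10000):
    x += v * dt + 0.5 * (f / m) * dt**2   # position update
    f_new = force(x)
    v += 0.5 * (f + f_new) / m * dt       # velocity update (averaged force)
    f = f_new
    energies.append(0.5 * m * v**2 + 0.5 * k * x**2)

drift = max(energies) - min(energies)
print(f"energy drift over 10000 steps: {drift:.2e}")
```

Because velocity Verlet is symplectic, the energy fluctuates within a tight bound rather than drifting, which is why the scheme is the workhorse of long biomolecular simulations.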
Parallel Programming with MPI and OpenMP Hands-on by Jens Domke and Bernd Mohr
In this lecture, students will be provided with all the necessary details to perform parallel programming exercises with MPI and OpenMP on the Fugaku supercomputer of RIKEN, Japan, one of the fastest computers in the world. Ideally, students should have some basic experience with programming in C, C++, Fortran, or Python. Experience with the Linux operating system is also helpful, but not required.
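For orientation, compiling and running a hybrid MPI + OpenMP program on Fugaku typically involves the Fujitsu cross-compiler wrappers and the batch system. The sketch below is based on common Fugaku conventions; the compiler flags, resource group, and process/thread counts are placeholders to be replaced by the values given in the session.

```sh
# On a login node: compile with the Fujitsu MPI C compiler wrapper
mpifccpx -Kfast,openmp hello.c -o hello

# job.sh: request nodes, MPI ranks, and OpenMP threads per rank
#PJM -L "node=2"
#PJM --mpi "proc=4"
#PJM -L "elapse=00:05:00"
export OMP_NUM_THREADS=12
mpiexec ./hello
```

The exercises will walk through the equivalent steps interactively.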
Smart Disaster Management: Automating Urban Models for Hazard-Based Damage Estimation by Satoru Oishi
In recent years, natural disasters have been occurring frequently around the world. It is difficult to predict or prevent natural hazards. However, since the vulnerability of human life to hazards causes disaster damage, it is equally important to predict the consequences of such events as it is to predict when a disaster will occur. We refer to this as damage estimation based on hazard scenarios. In this lecture, we will teach the methods of damage estimation.
For damage estimation, geoscientific information such as terrain data, hazard scenarios such as the seismic wave spectrum, and human-related information such as buildings and roads are all required. These are not stored in a single database, and their formats often differ. For example, geoscientific information is recorded in the form of cells, coordinates, or grid points, while information related to human life is described using addresses or polygons. Searching for these data, merging them appropriately, and feeding them into a single damage estimation program has traditionally been done manually by construction consultants. However, with the advancement of social digital twins, automation is now being sought. In this lecture, we will teach the automation of urban models and the damage estimation methods necessary for such simulations.
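The merging step can be pictured as a spatial join between differently formatted datasets. The toy sketch below joins gridded hazard intensities with building point coordinates; the grid values, coordinates, and threshold are all invented placeholders, and real workflows use GIS tools and actual hazard and building databases.

```python
# Toy spatial join: look up a gridded hazard value for each building point.
# Hazard intensity on a regular lattice with origin (0, 0) and cell size 1.0.
hazard_grid = [
    [0.1, 0.2, 0.3],
    [0.2, 0.5, 0.7],
    [0.1, 0.4, 0.9],
]  # hazard_grid[row][col], row = y index, col = x index
cell = 1.0

buildings = {            # building id -> (x, y) coordinate (placeholder data)
    "B1": (0.4, 0.6),
    "B2": (2.5, 2.1),
}

def hazard_at(x, y):
    """Map a point to its containing grid cell and return the hazard there."""
    col = int(x // cell)
    row = int(y // cell)
    return hazard_grid[row][col]

damage = {bid: ("high" if hazard_at(x, y) > 0.5 else "low")
          for bid, (x, y) in buildings.items()}
print(damage)  # B2 falls in a high-hazard cell
```

Automating this kind of lookup (with real coordinate systems, address geocoding, and polygon geometries instead of a toy grid) is exactly the step that consultants have so far performed by hand.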
Solving 3D puzzles of biomolecular interactions by physics- and AI-based integrative modelling by A.M.J.J. Bonvin
Predicting the structure of biological macromolecules and modelling their interactions and dynamics is of paramount importance for a fundamental understanding of cellular processes and for drug design. One way of increasing the accuracy of modelling methods used to predict the structure of biomolecular complexes is to include as much experimental or predictive information as possible in the process. We have developed for this purpose the versatile integrative modelling software HADDOCK (https://www.bonvinlab.org/software), available as a web service from https://wenmr.science.uu.nl. HADDOCK can integrate a large variety of information derived from biochemical, biophysical or bioinformatics methods to enhance sampling, scoring, or both.
The lecture will highlight some recent developments around HADDOCK and its new modular HADDOCK3 version, and illustrate its capabilities with various examples, including recent work on the modelling of antibody- and nanobody-antigen complexes that combines AI predictions with physics-based integrative modelling in HADDOCK.
The practical session will demonstrate the use of the new modular HADDOCK3 version for predicting the structure of an antibody-antigen complex, using knowledge of the hypervariable loops on the antibody (i.e., the most basic knowledge) and epitope information for the antigen identified from NMR experiments to guide the docking.
What is an eigenvalue problem, and how to compute it? (Lecture in the Computational Materials Science track) by Toshiyuki Imamura
In the field of computational materials science, numerical linear algebra, especially eigenvalue decomposition (EVD), is a fundamental mathematical component. We will review the basics of eigenvalues and their calculation through short exercises using the power method, a core numerical algorithm, along with other basic matrix decomposition techniques. After exploring the theory of EVD, we will engage in large-scale EVD applications on Fugaku, utilizing EigenExa, a high-performance parallel eigenvalue solver developed by RIKEN R-CCS.
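As a preview of the power-method exercise, here is a minimal NumPy sketch; EigenExa itself is a large-scale parallel solver, and this toy code only illustrates the basic iteration for finding the dominant eigenpair. The test matrix and tolerances are chosen for illustration.

```python
import numpy as np

def power_method(A, iters=1000, tol=1e-12):
    """Estimate the dominant eigenvalue/eigenvector of a square matrix A."""
    x = np.array([1.0] + [0.0] * (A.shape[0] - 1))  # arbitrary start vector
    lam = 0.0
    for _ in range(iters):
        y = A @ x
        x_new = y / np.linalg.norm(y)     # normalize to avoid overflow
        lam_new = x_new @ A @ x_new       # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:      # stop when the estimate settles
            return lam_new, x_new
        x, lam = x_new, lam_new
    return lam, x

# Symmetric test matrix with known eigenvalues 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, vec = power_method(A)
print(f"dominant eigenvalue ≈ {lam:.6f}")  # ≈ 3.0
```

Repeated multiplication amplifies the component of the start vector along the dominant eigenvector, which is why the iterate converges to it; full EVD solvers like EigenExa instead reduce the matrix and compute all eigenpairs at once.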