
Research team accelerates multi-physics simulations with El Capitan predecessor systems


Researchers at Lawrence Livermore National Laboratory (LLNL) have achieved a milestone in accelerating and adding features to complex multi-physics simulations run on Graphics Processing Units (GPUs), a development that could advance high-performance computing and engineering.
As LLNL readies for El Capitan, the National Nuclear Security Administration’s first exascale supercomputer, the team’s efforts have centered on the development of MARBL, a next-generation multi-physics code, for GPUs. El Capitan is based on AMD’s cutting-edge MI300A Accelerated Processing Units (APUs), which combine Central Processing Units (CPUs) with GPUs and high-bandwidth memory in a single package, allowing for more efficient resource sharing.
El Capitan’s heterogeneous (CPU/GPU) computing architecture, along with expectations that most future supercomputers will be heterogeneous, made it imperative that multi-physics codes like MARBL—which targets mission-relevant high-energy-density (HED) physics like those involved in inertial confinement fusion (ICF) experiments and stockpile stewardship applications—be able to perform efficiently across a wide variety of architectures, researchers said.
In a recent paper published in the Journal of Fluids Engineering, the researchers describe how they harnessed the power of GPUs, specifically AMD’s MI250X GPUs in El Capitan’s early access machines, to extend MARBL’s capabilities to include additional physics crucial for HED physics and fusion modeling.
“The big focus of this paper was supporting multi-physics—specifically multi-group radiation diffusion and thermonuclear burn, which are involved in fusion reactions—and the coupling of all of that with the higher-order finite-element moving mesh for simulating fluid motion,” principal investigator Rob Rieben said.
“To get performance on the GPU, there is a lot you have to do in terms of programming, optimizing kernels, balancing memory, and turning your code into a GPU-parallel code, and we were able to accomplish that.”
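That kernel-level porting follows a common pattern: a serial loop over mesh zones becomes a GPU kernel in which each thread updates one zone. The sketch below is a generic, minimal CUDA illustration of that pattern, not code from MARBL; the names (update_energy, heating_rate, num_zones) are hypothetical.

```cuda
// Minimal, hypothetical example: porting a per-zone physics update to the GPU.
// The serial CPU loop body becomes the kernel body, and one thread handles one zone.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void update_energy(double* energy, const double* heating_rate,
                              double dt, int num_zones)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < num_zones) {
        energy[i] += dt * heating_rate[i];  // independent work per zone
    }
}

int main()
{
    const int num_zones = 1 << 20;
    const double dt = 1.0e-3;

    double *energy, *heating_rate;
    cudaMallocManaged(&energy, num_zones * sizeof(double));
    cudaMallocManaged(&heating_rate, num_zones * sizeof(double));
    for (int i = 0; i < num_zones; ++i) { energy[i] = 1.0; heating_rate[i] = 0.5; }

    // Launch enough thread blocks to cover every zone.
    const int threads = 256;
    const int blocks = (num_zones + threads - 1) / threads;
    update_energy<<<blocks, threads>>>(energy, heating_rate, dt, num_zones);
    cudaDeviceSynchronize();

    printf("energy[0] = %f\n", energy[0]);
    cudaFree(energy);
    cudaFree(heating_rate);
    return 0;
}
```

Production codes that must run on both AMD and NVIDIA GPUs typically express such kernels once through a portability layer rather than directly in vendor-specific CUDA, so the same loop body can execute on either vendor's hardware.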
Since 2015, Rieben’s team has been dedicated to engineering MARBL, a scalable, GPU-accelerated multi-physics application for simulating HED physics experimental platforms, focusing on the simultaneous advancement of software abstractions and algorithmic developments to enable GPU performance.
The work described in the recent paper is essential for delivering on programmatic tasks that rely heavily on large-scale computational science to answer tough national security questions, said co-author Alejandro Campos. He added that the team faced two main challenges in extending MARBL’s capabilities: verifying that the additional physics modules were accurately implemented, and ensuring that those new modules could perform efficiently on the next generation of GPU-based machines.
Researchers said the team addressed those challenges through techniques such as new algorithms for solving linear systems with preconditioners, which have historically been optimized for CPUs. A breakthrough from LLNL’s Center for Applied Scientific Computing (CASC) led to a new type of preconditioner suited for GPUs, which was integrated into the code and scaled up for production use.
Preconditioners for linear solvers have been challenging to port to GPUs in a performant way, Rieben said. “CASC proposed a new type of preconditioner needed for solving diffusion equations that is specifically designed to provide high performance for high-order methods on GPUs, which enables us to run large 3D multi-physics simulations on GPU machines like El Capitan.”
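For context only, the snippet below shows the kind of pointwise operation that lets a preconditioner map well onto GPU threads: a simple Jacobi (diagonal) scaling in which every unknown is handled independently. It is a generic illustration of why some preconditioners suit GPUs better than others, not the CASC-developed preconditioner used in MARBL; the names (apply_jacobi, diag, r, z) are hypothetical.

```cuda
// Generic illustration: applying a Jacobi (diagonal) preconditioner, z = D^{-1} r.
// Each unknown is scaled independently by one GPU thread, with none of the
// sequential dependencies that make classic CPU-oriented preconditioners
// (e.g. ILU triangular solves) hard to parallelize. NOT the CASC preconditioner.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void apply_jacobi(const double* diag, const double* r, double* z, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        z[i] = r[i] / diag[i];  // pointwise scaling, no cross-thread coupling
    }
}

int main()
{
    const int n = 1 << 20;
    double *diag, *r, *z;
    cudaMallocManaged(&diag, n * sizeof(double));
    cudaMallocManaged(&r,    n * sizeof(double));
    cudaMallocManaged(&z,    n * sizeof(double));
    for (int i = 0; i < n; ++i) { diag[i] = 4.0; r[i] = 1.0; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    apply_jacobi<<<blocks, threads>>>(diag, r, z, n);
    cudaDeviceSynchronize();

    printf("z[0] = %f\n", z[0]);  // expect 0.25
    cudaFree(diag); cudaFree(r); cudaFree(z);
    return 0;
}
```

Preconditioners built from such independent, pointwise or element-local operations are generally far easier to make fast on GPUs than methods whose setup and application were designed around serial CPU execution.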
