Urbana Campus Research Calendar (OVCRI)

NPRE 596 Graduate Seminar Series - Paul Fischer

Event Type
Seminar/Symposium
Sponsor
NPRE 596 Graduate Seminar Series
Location
2100 Sidney Lu Mechanical Engineering Building, 1206 W Green St, Urbana, IL 61801
Date
Sep 24, 2024, 4:00 - 4:50 pm
Speaker
Paul Fischer, Professor, Mechanical Science & Engineering, University of Illinois Urbana-Champaign
Cost
Free and Open to the Public
Contact
Department of Nuclear, Plasma & Radiological Engineering
E-Mail
nuclear@illinois.edu
Phone
217-333-2295
Originating Calendar
NPRE seminars

 HPC and High-Order Methods for Thermal Hydraulics and MHD

Abstract: DOE's recently concluded Exascale Computing Project has enabled high-fidelity thermal-hydraulics (TH) and coupled-system simulations at scales ranging from unit tests to full-core calculations. Our focus in this talk is on the development and application of Nek5000/RS, which is a highly scalable open-source spectral element code for turbulent fluid-thermal simulation, including incompressible and low-Mach number flows, combustion, and incompressible magnetohydrodynamics (MHD). Our approach tackles two key ingredients for this class of problems: (i) efficient discretizations and (ii) efficient and scalable implementations.

Regarding (i), large-scale simulations of turbulence imply multiscale interactions and require relatively long integration times to transport small-scale flow features of size l through a domain of size L ≫ l. Such conditions place stringent requirements on numerical accuracy to prevent small-scale features from being distorted by numerical dispersion or dissipation. As noted by Kreiss and Oliger in 1972, high-order numerical methods are particularly efficient in this regime. We demonstrate that the benefits of high order pertain not only to the primitive Navier-Stokes equations (NSE), but also to model equations such as those used for large-eddy simulation (LES) and Reynolds-averaged Navier-Stokes (RANS) formulations. Like the NSE, LES and RANS are based on advection-dominated PDEs where high order yields efficiency through rapid convergence, requiring fewer grid points, n, than low-order counterparts.
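A schematic version of the Kreiss-Oliger argument, with constants and scheme details deliberately omitted (the exponents below are illustrative, not taken from the talk): for a method of formal order p, the phase error accumulated while a feature is transported across m ≈ L/l wavelengths scales roughly as

    \epsilon \;\sim\; m \left(\frac{2\pi}{N}\right)^{p},
    \qquad\text{so that}\qquad
    N \;\sim\; 2\pi \left(\frac{m}{\epsilon}\right)^{1/p},

where N is the number of grid points per wavelength needed to keep the error below \epsilon. For long transport distances (large m), the required N, and hence the total grid count n, grows only weakly with m when p is large, which is the efficiency argument sketched above.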

Regarding (ii), we discuss design and performance issues for high-order methods that are particularly critical at the strong-scale limit of high-performance computers (i.e., where parallel efficiency begins to deviate from unity). For fixed problem size, n, communication effects and kernel launch times (on GPUs) become important with an increasing number of processors, P. We show that the performance limits are strongly tied to the local problem size, n/P, and only weakly dependent on n or P individually. We explore the consequences of this fact in performance analysis for DOE's leadership-class computers such as Summit, Polaris, Frontier, and Aurora, which represent some of the fastest computers in the world.
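To make the n/P dependence concrete, here is a minimal toy strong-scaling model in Python; the overhead and per-point cost constants are illustrative assumptions for this sketch, not NekRS measurements or the speaker's actual performance model:

# Toy strong-scaling model (hypothetical constants, for illustration only).
# Time per step = fixed per-rank overhead (message latency / kernel launch)
# plus work proportional to the local problem size n/P.
def time_per_step(n, P, t_overhead=5e-5, t_point=1e-9):
    """Seconds per time step for n grid points on P ranks (toy model)."""
    return t_overhead + t_point * n / P

def efficiency(n, P, **kw):
    """Parallel efficiency relative to ideal speedup over one rank."""
    return time_per_step(n, 1, **kw) / (P * time_per_step(n, P, **kw))

# Efficiency is governed almost entirely by n/P: for P = 1000 ranks in this
# model, shrinking the local size from 10^6 to 10^4 points per rank drops
# efficiency from about 0.95 to about 0.17.
if __name__ == "__main__":
    P = 1000
    for n_per_rank in (1e6, 1e5, 1e4):
        print(int(n_per_rank), round(efficiency(n_per_rank * P, P), 2))

Doubling n and P together leaves the efficiency essentially unchanged in this model, while shrinking n/P toward the point where per-rank overhead dominates the per-rank work drives it toward zero, consistent with the claim that performance limits track n/P rather than n or P individually.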

We illustrate the effectiveness of the overall approach with NE-related applications throughout the talk, including recently developed MHD capabilities in NekRS and trillion-point TH simulations on Frontier.

Bio: Paul Fischer is a professor in Computer Science and in Mechanical Science and Engineering at the University of Illinois Urbana-Champaign and is also a senior scientist at Argonne National Laboratory. He holds degrees in mechanical engineering from Cornell (BS), Stanford (MS), and MIT (PhD) and held the inaugural Center for Research on Parallel Computation postdoctoral fellowship in applied mathematics at Caltech. Fischer pioneered the development of spectral element methods for high-performance simulations of turbulence, including the development of Nekton 2.0, the first commercial software for distributed-memory parallel computers. His research variant, Nek5000, is a prior Gordon Bell winner and has scaled to millions of ranks on Mira and Sequoia. It, along with the new GPU-based variant, NekRS, is used by over 500 researchers in industry and academia, and it is part of the NRC's licensing software suite. From 2017 to 2023, Fischer was Deputy Director of the Center for Efficient Exascale Discretizations (CEED), supported by DOE's Exascale Computing Project. Fischer's current research focuses on advanced preconditioners for GPU-based solutions of PDEs and reduced-order models for turbulent flows with applications to industrial problems.
