Abstract
Increasing computing power and improvements in sensing technology give us an unprecedented ability to simulate and collect data from complex fluid flows. To leverage these data for real engineering problems, my research develops rigorous scientific machine learning algorithms, along with theory explaining their capabilities and fundamental limitations in fluid dynamics applications. The first main thrust of this work develops structured operator learning methods that provide efficient surrogate models and preconditioners for costly linear and nonlinear solvers. The second thrust, the primary focus of this talk, is data-driven reduced-order modeling.
A reduced-order model (ROM) is a simplified approximation of a high-dimensional dynamical system, such as a fluid flow, that can be used for qualitative analysis, real-time forecasting, state estimation, and control. Shear-dominated fluid flows are especially difficult to model using data-driven techniques such as proper orthogonal decomposition (also known as principal component analysis), kernel-based manifold learning, and autoencoders because these methods discard low-variance variables, neglecting their importance for the future dynamics. We show that this is a fundamental limitation related to the curse of dimensionality and that additional information is needed to capture these sensitivity mechanisms. To extract reliable coordinates for forecasting, we introduce an efficient algorithm called Covariance Balancing Reduction using Adjoint Snapshots (CoBRAS). This method uses state and randomized gradient data, obtained by solving linearized adjoint equations, to construct an oblique projection that balances the effects of state variance against the sensitivity of future outputs to the truncated degrees of freedom. To refine an initial linear projection, such as the one produced by CoBRAS, we introduce Trajectory-based Optimization for Oblique Projections (TrOOP), a gradient descent method that minimizes forecasting error on trajectory data. We also develop a kernel-based extension of CoBRAS that extracts powerful nonlinear coordinates, as well as an autoencoder architecture capable of learning nonlinear projections that condense states along curved fibers onto learned low-dimensional manifolds. We demonstrate these techniques, and the limitations of standard methods, on a nonlinear axisymmetric jet flow simulation with 100,000 state variables.
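The balancing idea behind an oblique projection of this kind can be illustrated with a minimal NumPy sketch. This is not the full CoBRAS algorithm (which involves randomized sketching of adjoint gradient samples and output weighting); it only shows, under assumed placeholder snapshot matrices `X` (states) and `Y` (adjoint/gradient samples), how a singular value decomposition of the cross matrix `Y.T @ X` yields biorthogonal modes `Phi` and `Psi` defining an oblique projection `P = Phi @ Psi.T`.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 50, 20, 3  # state dimension, number of snapshots, reduced dimension

# Placeholder data standing in for simulation output:
X = rng.standard_normal((n, m))  # state snapshots (columns)
Y = rng.standard_normal((n, m))  # adjoint/gradient snapshots (columns)

# Balancing: SVD of the cross matrix formed from the two snapshot sets
U, s, Vt = np.linalg.svd(Y.T @ X)
Sinv = np.diag(s[:r] ** -0.5)

# Biorthogonal mode pairs defining the oblique projection
Phi = X @ Vt[:r].T @ Sinv   # reconstruction (trial) modes
Psi = Y @ U[:, :r] @ Sinv   # test modes, with Psi.T @ Phi = I_r

P = Phi @ Psi.T
print(np.allclose(Psi.T @ Phi, np.eye(r)))  # biorthogonality holds
print(np.allclose(P @ P, P))                # P is a (non-orthogonal) projection
```

The key property is that `P` projects along directions to which the future outputs are insensitive (as encoded by `Y`), rather than simply discarding low-variance directions of `X` as an orthogonal POD projection would.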
Future work will investigate the role of the adjoint, which is often impossible to access in experimental settings. By leveraging known structure, we aim to introduce operator learning and model reduction methods that can be reliably trained without adjoints, and can be rapidly transferred to new spatial domains other than those seen during training. A major goal is to develop operator learning methods capable of serving as preconditioners to accelerate numerical simulations.
About the Speaker
Samuel E. Otto is a postdoctoral scholar at the AI Institute in Dynamic Systems at the University of Washington. His research leverages tools from applied analysis and scientific machine learning to tackle challenging engineering problems related to modeling high-dimensional systems such as fluid flows. He received his Ph.D. in mechanical and aerospace engineering from Princeton University in May 2022, advised by Clarence W. Rowley, with a dissertation titled “Advances in Data-Driven Modeling and Sensing for High-Dimensional Nonlinear Systems”. Before that, he earned a B.S. in aeronautics and astronautics from Purdue University with a minor in mathematics. He continues to avidly self-teach mathematics and is a member of the APS and SIAM. In his spare time he enjoys weightlifting and can also be found visiting the bear cubs at the Seattle Zoo.