
About
Add-On Workshop: Scientific Machine Learning
February 27, 2025
Add-on workshops will be available as part of the 18th annual Energy HPC Conference. Each workshop will take place at Rice University's BRC on Thursday, February 27, 2025. The workshops run simultaneously, so only one workshop can be chosen per registration.
Scientific Machine Learning
Auditorium | 8:30 am - 3:00 pm
Organizers:
- Beatrice Riviere (Rice University)
- Matthias Heinkenschloss (Rice University)
Schedule
- 8:30 - 9:00 am: Check-in + Breakfast
- 9:00 - 10:00 am: Charbel Farhat (Stanford)
- 10:00 - 11:00 am: Jonas Actor (Sandia National Lab)
- 11:00 - 11:20 am: Adrian Celaya (Rice University)
- 11:20 - 11:40 am: Jonathan Cangelosi (Rice University)
- 11:40 am - 1:00 pm: Lunch
- 1:00 - 2:00 pm: Elizabeth Qian (Georgia Tech)
- 2:00 - 3:00 pm: Benjamin Peherstorfer (NYU)
Speaker: Charbel Farhat (Stanford)
Session: Mechanics-Informed Machine Learning for the Discovery of Constitutive Models
Abstract: With the rise of machine learning (ML), deep artificial neural networks (ANNs) have emerged as powerful tools for data-driven constitutive modeling in computational mechanics, particularly in the realm of numerical homogenization of heterogeneous materials. However, traditional ANNs come with inherent limitations in this context. They are primarily designed to map input data to output data without integrating fundamental constraints, which can often lead to violations of physical laws during physics-based numerical simulations. This undermines confidence in the predictions generated by these models. To address this challenge, this lecture will introduce a reliable ML framework for the data-driven discovery of constitutive models tailored for heterogeneous materials, which is deeply informed by mechanics principles. This innovative framework imposes a comprehensive set of desirable mathematical properties on the architecture of the ANN, ensuring compliance with a wide array of physical and mechanical constraints. These constraints include dynamic stability, material stability, internal variable stability, objectivity, consistency, fading memory, recovery of elasticity, adherence to the second law of thermodynamics, and non-inversion of materials. The lecture will demonstrate how incorporating these principles within a learning framework enhances a model’s resilience to noise and improves its robustness to inputs that lie outside the training domain. Additionally, it will emphasize the advantages of this trustworthy ML framework in various engineering applications, such as predicting the supersonic inflation dynamics of a parachute system constructed from woven fabric for landing Perseverance on Mars.
Speaker: Jonas Actor (Sandia National Lab)
Session: Leveraging Approximation Theory for Efficient Scientific Machine Learning
Abstract: While machine learning methods are playing an increasingly prominent role in accomplishing scientific tasks, they still lack practical theory to guide decisions about their architecture, parameterization, and hyperparameter selection. In this talk, I show two different strategies for how to leverage approximation theory to fill this gap. First, I pose a machine learning scheme based on radial basis function approximation that admits closed-form integrals and moment computations, which allows weak forms of equations to be assembled analytically as part of a neural network’s architecture; the resulting network has approximation guarantees due to the underlying radial basis functions. Second, I analyze conventional deep networks through the lens of the composition of spline basis functions; in this light, multilayer perceptrons become a geometrically reduced version of Kolmogorov-Arnold Networks. This insight allows a natural method for geometric refinement of neural networks, providing approximation properties that converge with the spatial resolution of the splines. For both approaches, I demonstrate these methods with examples that highlight their approximation capabilities for scientific tasks.
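A minimal sketch of the first ingredient, radial basis function approximation (an illustrative example of my own, not the talk's actual weak-form scheme): Gaussian RBFs are attractive because their integrals and moments have closed forms, and even as plain interpolants they carry approximation guarantees. The target function, shape parameter, and center count below are all assumptions chosen for the demo.

```python
import numpy as np

# Hypothetical sketch: fit a smooth 1D function with Gaussian radial basis
# functions. Gaussian RBFs admit closed-form integrals/moments, which is what
# allows weak forms to be assembled analytically in the setting described in
# the talk; here we only show the interpolation step.

def rbf(r, eps=10.0):
    """Gaussian RBF with shape parameter eps (an assumed value)."""
    return np.exp(-(eps * r) ** 2)

centers = np.linspace(0.0, 1.0, 15)       # RBF centers (assumed count)
x_train = centers                          # collocate at the centers
y_train = np.sin(2 * np.pi * x_train)      # samples of the target function

# Interpolation matrix A[i, j] = phi(|x_i - c_j|); solve for the weights
A = rbf(np.abs(x_train[:, None] - centers[None, :]))
weights = np.linalg.solve(A, y_train)

def approx(x):
    """Evaluate the RBF interpolant at points x."""
    return rbf(np.abs(np.asarray(x)[:, None] - centers[None, :])) @ weights

x_test = np.linspace(0.0, 1.0, 200)
err = np.max(np.abs(approx(x_test) - np.sin(2 * np.pi * x_test)))
print(f"max interpolation error: {err:.2e}")
```

The interpolant is exact at the centers by construction; the error between them shrinks as centers are added or the shape parameter is tuned.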
Speaker: Adrian Celaya (Rice University)
Session: Learning Finite Difference and Discontinuous Galerkin Solutions to Elliptic Problems via Numerics-Informed Neural Networks
Abstract: In recent years, there has been an increasing interest in using deep learning and neural networks to tackle scientific problems, particularly in solving partial differential equations (PDEs). However, many neural network-based methods, such as physics-informed neural networks, depend on automatic differentiation and the sampling of collocation points, which can result in a lack of interpretability and lower accuracy compared to traditional numerical methods. Numerics-informed neural networks (NINNs) address this issue by learning discretized solutions to PDEs, resulting in more interpretable solutions. We propose two NINNs for learning numerical solutions to elliptic PDEs. The first approach learns finite difference (FD) solutions, and the second learns discontinuous Galerkin (DG) solutions. In both cases, we see that our proposed approaches accurately recover the FD and DG solutions.
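The numerics-informed idea can be illustrated in a few lines (a toy sketch under my own assumptions, not the authors' code): for the 1D Poisson problem -u'' = f, the NINN loss penalizes the residual of the finite-difference discretization rather than an autodiff collocation loss, so its exact minimizer is the classical FD solution.

```python
import numpy as np

# Hypothetical sketch of a numerics-informed loss for -u'' = f on (0,1) with
# u(0) = u(1) = 0: minimize ||A u_theta - f||^2, where A is the standard
# finite-difference operator. For brevity, "u_theta" below is the exact
# minimizer obtained by a direct solve; in the talk's setting it would be the
# output of a neural network at the grid points.

n = 49
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)          # manufactured so u_exact = sin(pi x)

# Second-order centered-difference matrix for -u''
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def ninn_loss(u_pred):
    """Numerics-informed loss: discrete FD residual, no autodiff needed."""
    r = A @ u_pred - f
    return float(r @ r)

u_fd = np.linalg.solve(A, f)              # the loss's exact minimizer
err = np.max(np.abs(u_fd - np.sin(np.pi * x)))
print(f"loss at minimizer: {ninn_loss(u_fd):.1e}, max error vs exact: {err:.2e}")
```

Because the loss targets a known discretization, the learned solution inherits the FD method's O(h^2) accuracy and interpretability.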
Speaker: Jonathan Cangelosi (Rice University)
Session: Sensitivity-Driven Surrogate Modeling for Trajectory Optimization
Abstract: In many applications, one wants to compute an optimal trajectory for a dynamical system in which certain parts of the dynamics depend on computationally expensive quantities. Solving an optimal control problem requires repeated computation of these quantities and their derivatives at many states, leading to prohibitively high computational costs. By performing expensive high-fidelity computations at a small number of states, one may obtain sufficient data to construct inexpensive surrogate models of these quantities for use in the optimization algorithm; however, this introduces inexactness into the problem, which can lead to poor solution quality if the surrogates are not sufficiently accurate or the problem is sensitive to surrogate errors. When additional high-fidelity computations may be performed as needed to improve the surrogates, one must determine at which states to perform these computations. In this talk, I present a novel adaptive sampling approach that leverages sensitivity information from the optimal control problem and pointwise error bounds for the surrogates, giving a method to both assess and improve solution quality with minimal high-fidelity computations. I then provide a numerical example of a trajectory optimization problem for a notional hypersonic vehicle with lift, drag, and moment functions that must be approximated. The results show that my sensitivity-driven approach selects the best states for high-fidelity computations when the surrogates are under-resolved, reducing the number of high-fidelity computations required to obtain a good solution.
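The selection idea can be caricatured in a few lines (all quantities below are illustrative stand-ins, not the talk's actual sensitivity estimates or bounds): weight each candidate state's surrogate error bound by the optimal control problem's sensitivity to errors at that state, and spend the next high-fidelity evaluation where the product is largest.

```python
import numpy as np

# Hypothetical greedy selection sketch: the product of a sensitivity magnitude
# and a pointwise surrogate error bound estimates each state's effect on the
# optimal-control solution. All arrays here are made-up placeholders.

rng = np.random.default_rng(0)
states = np.linspace(0.0, 1.0, 20)               # candidate states on the trajectory
sensitivity = 1.0 + np.abs(np.sin(4 * states))   # |d(solution)/d(surrogate error)|
error_bound = rng.uniform(0.0, 0.1, size=20)     # pointwise surrogate error bounds

impact = sensitivity * error_bound               # estimated effect on the solution
next_state = states[np.argmax(impact)]
print(f"refine surrogate at state x = {next_state:.3f}")
```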
Speaker: Elizabeth Qian (Georgia Tech)
Session: Multifidelity Linear Regression for Scientific Machine Learning from Scarce Data
Abstract: Machine learning (ML) methods have garnered significant interest as potential methods for learning surrogate models for complex engineering systems for which traditional simulation is expensive. However, in many scientific and engineering settings, training data are scarce due to the cost of generating data from traditional high-fidelity simulations. ML models trained on scarce data have high variance and are sensitive to the vagaries of the training data set. We propose a new multifidelity training approach for scientific machine learning that exploits the scientific context in which data of varying fidelities and costs are available; for example, high-fidelity data may be generated by an expensive, fully resolved physics simulation, whereas lower-fidelity data may arise from a cheaper model based on simplifying assumptions. We use the multifidelity data to define new multifidelity Monte Carlo estimators for the unknown parameters of linear regression models, and we provide theoretical analyses that guarantee accuracy and improved robustness to small training budgets. Numerical results show that multifidelity learned models achieve order-of-magnitude lower expected error than standard training approaches when high-fidelity data are scarce.
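A control-variate-style sketch of the multifidelity idea (an assumption on my part; the talk's actual estimators may differ): correct a low-fidelity fit on plentiful cheap data with the discrepancy between high- and low-fidelity fits on the few paired expensive samples.

```python
import numpy as np

# Hypothetical multifidelity regression sketch. The cheap model is biased but
# abundant; the expensive model is accurate but scarce. All data are synthetic.

rng = np.random.default_rng(1)

def ols(X, y):
    """Ordinary least-squares coefficient estimate."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_true = np.array([1.0, -2.0])
n_hi, n_lo = 8, 400                        # scarce expensive vs plentiful cheap data

X_lo = np.column_stack([np.ones(n_lo), rng.uniform(-1, 1, n_lo)])
y_lo = X_lo @ (beta_true + np.array([0.3, 0.2])) \
       + 0.05 * rng.standard_normal(n_lo)  # biased low-fidelity responses

X_hi = X_lo[:n_hi]                         # paired samples: both fidelities evaluated
y_hi = X_hi @ beta_true + 0.05 * rng.standard_normal(n_hi)

# Low-fidelity estimate on lots of data, corrected by the paired discrepancy
beta_mf = ols(X_lo, y_lo) + (ols(X_hi, y_hi) - ols(X_hi, y_lo[:n_hi]))
print("multifidelity estimate:", beta_mf)
```

The abundant cheap data pin down the estimator's variance, while the paired high-fidelity samples remove the cheap model's bias.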
Speaker: Benjamin Peherstorfer (NYU)
Session: Leveraging Nonlinear Latent Dynamics for Numerically Forecasting High-Dimensional Systems
Abstract: Many high-dimensional and seemingly intractable problems in science and engineering have well-behaved latent dynamics that offer a path towards their numerical solution. In this talk, we will demonstrate that nonlinear and data-driven approximations can help leverage latent dynamics that are out of reach of more traditional computational methods. First, we will present Neural Galerkin schemes that overcome the Kolmogorov barrier via nonlinear latent representations and active sampling, enabling rapid predictions of transport-dominated phenomena that are inaccessible to traditional, linear model reduction methods. Second, we will present a variational approach for learning reduced models of systems that feature stochastic and mean-field effects. The approach infers parameter- and time-dependent gradient fields to efficiently generate sample trajectories that approximate the system's population dynamics over varying physics parameters. Along the way, we will report numerical experiments that showcase how leveraging latent dynamics enables solving science and engineering applications, from modeling rotating detonation waves that are of interest in space propulsion to predicting Vlasov-Poisson instabilities to forecasting high-dimensional chaotic systems.
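A minimal Neural Galerkin-style toy (a sketch under my own simplifying assumptions, far from the talk's schemes): for the advection equation u_t = -c u_x, take a nonlinear ansatz u(x; m) = exp(-(x - m)^2 / (2 s^2)) with a single latent parameter m (the pulse center), and evolve m by least-squares projecting the PDE right-hand side onto the tangent space of the ansatz at sampled points.

```python
import numpy as np

# Hypothetical one-parameter Neural Galerkin sketch for u_t = -c u_x.
# At each step, solve min_mdot || J mdot - f || where J = du/dm at sample
# points and f is the PDE right-hand side; the talk's active sampling is
# replaced by fixed uniform samples for brevity.

c, s = 1.0, 0.3
xs = np.linspace(-3.0, 6.0, 200)          # fixed sample points (assumption)

def u(x, m):
    return np.exp(-(x - m) ** 2 / (2 * s**2))

def du_dm(x, m):                          # tangent direction d u / d m
    return (x - m) / s**2 * u(x, m)

def du_dx(x, m):
    return -(x - m) / s**2 * u(x, m)

m, dt = 0.0, 0.01
for _ in range(100):                      # integrate to t = 1 with forward Euler
    J = du_dm(xs, m)[:, None]             # Jacobian of ansatz w.r.t. parameters
    f = -c * du_dx(xs, m)                 # right-hand side of u_t = -c u_x
    m_dot = np.linalg.lstsq(J, f, rcond=None)[0][0]
    m += dt * m_dot

print(f"pulse center after t=1: {m:.4f} (exact transport: {c * 1.0:.4f})")
```

Because the nonlinear ansatz can translate, this one-parameter latent dynamic transports the pulse exactly, which is precisely what fixed linear bases struggle with (the Kolmogorov barrier mentioned in the abstract).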