29 April
PhD seminar: Rahul Manavalan
Title
Solving and learning PDEs using Gaussian processes
Abstract
In the current era of AI, machine learning approaches for solving partial differential equations (PDEs) have gained renewed interest. A significant portion of this research focuses on neural network-based methods, notably the Physics-Informed Neural Networks (PINNs) paradigm. Despite their theoretical appeal, PINNs often require meticulous tuning during training and lack straightforward convergence guarantees in the finite-width regime. Additionally, regularizing PINNs to control function smoothness presents challenges, as Sobolev optimization can be unstable.
Gaussian Processes (GPs) offer a promising alternative, providing well-posedness, natural regularization, and theoretical convergence guarantees. However, their application in computational science has been limited by the cubic computational complexity associated with dense kernel matrices. In this talk, I will discuss a method developed by Chen et al. (2021) that addresses these computational challenges, enabling GP solvers to handle PDEs efficiently. Their framework recasts the PDE as a tractable optimization problem and leverages sparse Cholesky factorization to achieve near-log-linear complexity for linear elliptic PDEs and at least quadratic complexity for nonlinear problems. I will also present empirical results showing that the complexity constants are often surprisingly small in practice.
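To give a flavor of the approach before the talk: for a linear PDE, GP collocation reduces to one linear solve against a Gram matrix whose entries are the kernel with the differential operator applied to one or both arguments. The sketch below is an illustrative toy example, not the authors' implementation: it solves -u'' = π² sin(πx) on (0,1) with zero boundary values using a squared-exponential kernel; the problem, point counts, and lengthscale are all chosen for demonstration (and it uses a dense solve, not the sparse Cholesky factorization discussed in the talk).

```python
import numpy as np

def k(r, l):
    # Squared-exponential kernel as a function of r = x - y
    return np.exp(-r**2 / (2 * l**2))

def Lk(r, l):
    # Operator L = -d^2/dx^2 applied to ONE argument of k
    # (same expression for either argument, since k depends only on r)
    return k(r, l) * (1 / l**2 - r**2 / l**4)

def LLk(r, l):
    # L applied to BOTH arguments: the fourth r-derivative of k
    return k(r, l) * (3 / l**4 - 6 * r**2 / l**6 + r**4 / l**8)

# Toy problem (assumed for illustration): -u'' = pi^2 sin(pi x), u(0)=u(1)=0,
# whose exact solution is u(x) = sin(pi x).
l = 0.2
Xi = np.linspace(0, 1, 22)[1:-1]      # interior collocation points (observe Lu = f)
Xb = np.array([0.0, 1.0])             # boundary points (observe u = 0)
f = np.pi**2 * np.sin(np.pi * Xi)
g = np.zeros(2)

# Pairwise differences for each block of the Gram matrix
Rii = Xi[:, None] - Xi[None, :]
Rib = Xi[:, None] - Xb[None, :]
Rbb = Xb[:, None] - Xb[None, :]

# Block Gram matrix over the observations [Lu(Xi); u(Xb)]
K = np.block([[LLk(Rii, l), Lk(Rib, l)],
              [Lk(Rib, l).T, k(Rbb, l)]])
alpha = np.linalg.solve(K + 1e-10 * np.eye(len(K)),   # small jitter for stability
                        np.concatenate([f, g]))

# Posterior mean of u at test points
xs = np.linspace(0, 1, 101)
Phi = np.hstack([Lk(xs[:, None] - Xi[None, :], l),
                 k(xs[:, None] - Xb[None, :], l)])
u = Phi @ alpha
err = np.max(np.abs(u - np.sin(np.pi * xs)))
print(f"max error vs exact sin(pi x): {err:.2e}")
```

The dense solve here is exactly the cubic-cost bottleneck the talk addresses; the Chen et al. framework replaces it with a sparse Cholesky factorization of (a reordered approximation of) this Gram matrix, and extends the same collocation idea to nonlinear PDEs via an optimization formulation.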
As this is preliminary work, the presentation will focus on elucidating the core concepts, abstracting the method into modular components, and exploring potential extensions. The session will conclude with a discussion on open questions and prospective applications in computational science.
Reference
Chen, Y., Hosseini, B., Owhadi, H., & Stuart, A. M. (2021). Solving and learning nonlinear PDEs with Gaussian processes. Journal of Computational Physics, 447, 110668. https://doi.org/10.1016/j.jcp.2021.110668