Numerical Methods for Partial Differential Equations: From the Finite Element Method to Neural Networks
In the vast majority of theoretical and real-world applications, the solution to a partial differential equation cannot be computed analytically. The aim of this course is to explore two methods for computing approximate solutions. The first, "traditional" one is the Finite Element Method. The second makes use of recent developments in Deep Learning, leading to the so-called "Physics-Informed Neural Networks". In the first part, we will study the fundamental tools needed to solve elliptic partial differential equations, such as weak derivatives and Sobolev spaces, and show how to approximate PDEs using basic finite element tools. In the second part, we define the notion of a neural network and prove the so-called "Universal Approximation Theorem". Using backpropagation, we then show how a network can be trained to solve certain partial differential equations.
This course will be given as an Arbeitsgemeinschaft (working group) at the University of Würzburg during the summer semester of 2024. This page provides all the necessary resources for the students, and more!
Overview of the course
1. Finite Element Method
1.1. Introduction to the variational formulation
Consider the prototypical Poisson equation \begin{equation} \begin{cases} - \Delta u = f &\mbox{ in } \Omega \\ u = 0 & \mbox{ on } \partial \Omega \end{cases} \end{equation} on a domain $\Omega$. Using Stokes' formula (which we will recall), one can show that $u \in C^2$ is a solution of this equation if and only if it satisfies $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega fv $$ for all $v\in C^1$ such that $v=0$ on $\partial \Omega$. This motivates the introduction of the Lax-Milgram theorem, an abstract framework to prove the existence and uniqueness of solutions of such problems.
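To see where this identity comes from, multiply the equation by such a test function $v$ and integrate by parts; the boundary term vanishes since $v = 0$ on $\partial \Omega$:
$$ \int_\Omega f v = -\int_\Omega (\Delta u)\, v = \int_\Omega \nabla u \cdot \nabla v - \int_{\partial\Omega} (\partial_n u)\, v = \int_\Omega \nabla u \cdot \nabla v. $$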
However, the theorem cannot be applied in spaces of $C^k$ functions, since these are not complete for the natural scalar product; this makes it necessary to generalize the meaning of the Poisson equation and to work in the so-called Sobolev spaces.
1.2. A short introduction to Sobolev spaces: the case of $H^1$
After a quick reminder on the $L^2$ space, we introduce the notion of weak derivative and define the Sobolev space $H^1(\Omega)$.
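To fix ideas, recall the one-dimensional definition (a standard example, not specific to this course): $w \in L^2(\Omega)$ is a weak derivative of $u \in L^2(\Omega)$ if
$$ \int_\Omega u\, \varphi' = -\int_\Omega w\, \varphi \quad \text{for all } \varphi \in C_c^\infty(\Omega). $$
For instance, $u(x) = |x|$ on $(-1,1)$ is not differentiable at $0$, yet it admits the weak derivative $w(x) = \operatorname{sign}(x)$.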
We then proceed to study canonical examples, such as the Poisson equation with various boundary conditions and other elliptic problems.
1.3. The Finite Element Method
Solving a PDE algorithmically requires approximating it by a finite-dimensional problem. The most elementary way to do so is perhaps the finite difference method, which amounts to discretizing the domain $\Omega$. The Finite Element Method takes another approach, namely discretizing the Sobolev space $H^1(\Omega)$ (which, in turn, requires meshing $\Omega$). We will detail the principle of internal approximation of a variational problem. We then explain the principles of the FEM, first for $P_1$ elements in 1D, writing explicitly the mass and stiffness matrices corresponding to the Poisson problem, and finally generalize to dimension $n$ with triangular meshes. A sketch of the 1D case is given below.
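As an illustration, here is a minimal sketch in Python (using NumPy; the function name and the lumped-load quadrature are choices made for this example) of the $P_1$ assembly and solve for $-u'' = f$ on $(0,1)$ with homogeneous Dirichlet conditions on a uniform mesh:

```python
import numpy as np

def solve_poisson_1d(f, n):
    """P1 finite elements for -u'' = f on (0,1), u(0) = u(1) = 0,
    on a uniform mesh with n interior nodes."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)  # interior nodes
    # Stiffness matrix: tridiagonal (1/h) * [-1, 2, -1]
    K = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    # Load vector with a lumped quadrature: F_i ≈ h * f(x_i)
    F = h * f(x)
    return x, np.linalg.solve(K, F)

# For f = 1 the exact solution is u(x) = x(1 - x)/2
x, u = solve_poisson_1d(lambda x: np.ones_like(x), 50)
print(np.max(np.abs(u - x * (1 - x) / 2)))  # error close to machine precision
```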
Finally, we will use the theoretical knowledge of the previous sections to numerically approximate otherwise unsolvable problems. We will use FreeFEM++, a language and framework that allows solving a variational formulation in an easy and transparent way.
2. Neural Networks for PDEs
2.1. A brief introduction to Neural Networks
We provide a brief overview of Deep Learning and its uses (supervised, unsupervised, reinforcement, generative, regression, classification). We define the notion of a dense neural network, then state and prove the Universal Approximation Theorem, which states that every continuous function on a compact set can be approximated arbitrarily well by a neural network.
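One classical formulation, in the spirit of Cybenko's theorem (the precise version proved in the course may differ): let $\sigma$ be a continuous sigmoidal function. Then, for every compact $K \subset \mathbb{R}^d$, every $f \in C(K)$ and every $\varepsilon > 0$, there exist $N \in \mathbb{N}$, $a_i, b_i \in \mathbb{R}$ and $w_i \in \mathbb{R}^d$ such that
$$ \sup_{x \in K} \Big| f(x) - \sum_{i=1}^{N} a_i\, \sigma(w_i \cdot x + b_i) \Big| < \varepsilon. $$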
We then tackle the training of a neural network, introducing the concepts of loss function, gradient descent and backpropagation; see the sketch below.
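A minimal training loop in PyTorch (an illustrative sketch, not course material; the toy data and architecture are arbitrary choices):

```python
import torch

# A small dense (fully-connected) network mapping R -> R
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# Toy supervised data: learn sin on [-pi, pi]
x = torch.linspace(-torch.pi, torch.pi, 128).unsqueeze(1)
y = torch.sin(x)

for step in range(2000):
    loss = torch.mean((model(x) - y) ** 2)  # mean-squared-error loss
    optimizer.zero_grad()
    loss.backward()   # backpropagation computes the gradients
    optimizer.step()  # one gradient-descent step
print(loss.item())
```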
2.2. Using Neural Networks as PDE solvers
Through the FEM, we have seen a precise and elegant way to approximate the solution of a PDE. Using the approximation capabilities of neural networks, we will see how to solve complex PDEs using little to no data. We will discuss recent developments in the field, such as:
- Physics Informed Neural Networks (PINNs) [1]
- Deep Ritz Method (DRM) [2]
- Variational PINNs (vPINNs) [3]
- Weak Adversarial Networks (WANs) [4]
- Deep Operator Networks (DeepONets) [5]
Using PyTorch, we will see on an example how fast and easy it is to solve classical PDEs; a sketch in the spirit of PINNs follows.
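For instance, a minimal PINN-style sketch (illustrative only; the architecture, sampling and learning rate are arbitrary choices) for $-u'' = \pi^2 \sin(\pi x)$ on $(0,1)$ with $u(0) = u(1) = 0$, whose exact solution is $u(x) = \sin(\pi x)$:

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.rand(256, 1, requires_grad=True)  # collocation points in (0,1)
x_bc = torch.tensor([[0.0], [1.0]])         # boundary points
f = lambda t: torch.pi ** 2 * torch.sin(torch.pi * t)

for step in range(5000):
    u = net(x)
    # u' and u'' by automatic differentiation
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = -d2u - f(x)  # PDE residual at the collocation points
    loss = (residual ** 2).mean() + (net(x_bc) ** 2).mean()  # residual + boundary loss
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test = torch.linspace(0, 1, 101).unsqueeze(1)
print((net(x_test) - torch.sin(torch.pi * x_test)).abs().max().item())
```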
Project
The course will end with a project, carried out in groups of 2-3 students. While the choice of subject is quite free, each group will have to implement at least one simulation using the FEM or neural networks. The subject may be treated from a more theoretical or a more numerical point of view, depending on each student's taste.
Here are some ideas of possible projects (some of which may be difficult...):
- Classical time-dependent equations (heat, wave, Schrödinger...)
- Some non-linear problems (steady Navier-Stokes equation...)
- Comparing the efficiency of FEM and NNs on simple examples
- Parametric optimization (optimization of elastic structures, optimal profile of an obstacle in a Stokes flow...)
- Eigenvalue simulation (can we hear the shape of a drum, eigenstates of the Schrödinger operator, vibrating structures...)
- Reaction-diffusion systems (Gray-Scott model, Allen-Cahn equation...)
Companion books/notes:
- Numerical Analysis and Optimization, G. ALLAIRE.
- Real and Complex Analysis, W. RUDIN.
- An Overview of Artificial Neural Networks for Mathematicians, L. FERREIRA GUILHOTO.
References:
[1] RAISSI, PERDIKARIS and KARNIADAKIS, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics (2019).
[2] WEINAN E and BING YU, The Deep Ritz Method: A Deep Learning-Based Numerical Algorithm for Solving Variational Problems, Communications in Mathematics and Statistics (2018).
[3] KHARAZMI, ZHANG and KARNIADAKIS, Variational Physics-Informed Neural Networks For Solving Partial Differential Equations, arXiv (2019).
[4] ZANG, BAO, YE and ZHOU, Weak Adversarial Networks for High-dimensional Partial Differential Equations, Journal of Computational Physics (2019).
[5] LU, JIN and KARNIADAKIS, DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators, Nature Machine Intelligence (2020).