Numerical Methods for Partial Differential Equations: from the Finite Element Method to Neural Networks

Fluid simulation
Simulation of a fluid around a sphere using FEniCS.

In the vast majority of theoretical and real-world applications, the solution to a partial differential equation cannot be computed analytically. The aim of this course is to explore two methods for computing approximate solutions. The first, "traditional" one is the Finite Element Method. The second one makes use of recent developments in Deep Learning leading to the so-called "Physics Informed Neural Networks". In the first part, we will study the fundamental tools needed to solve elliptic partial differential equations, such as weak derivatives and Sobolev spaces. We will show how to approximate PDEs using basic finite element tools. In the second part, we present the definition of a Neural Network and prove the so-called "Universal Approximation Theorem". Using backpropagation, we show how a network can be trained to solve some partial differential equations.

This course will be given as an Arbeitsgemeinschaft (working group) at the University of Würzburg during the summer semester of 2024. This page will provide all the necessary resources for the students and more!

Overview of the course

1. Finite Element Method

1.1. Introduction to the variational formulation

Consider the prototypical Poisson equation \begin{equation} \begin{cases} - \Delta u = f &\mbox{ in } \Omega \\ u = 0 & \mbox{ on } \partial \Omega \end{cases} \end{equation} on a domain $\Omega$. Using Stokes' formula (integration by parts, which we will recall), we can show that $u \in C^2$ is a solution of this equation if and only if it satisfies $$ \int_\Omega \nabla u \cdot \nabla v = \int_\Omega fv $$ for all $v\in C^1$ such that $v=0$ on $\partial \Omega$. This motivates the introduction of the Lax-Milgram theorem, an abstract framework to prove the existence and uniqueness of solutions of such problems.
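
To see where the weak formulation comes from (a one-line sketch using only the facts above): multiplying the equation by a test function $v$ with $v=0$ on $\partial \Omega$ and integrating by parts gives $$ \int_\Omega f v = \int_\Omega (-\Delta u)\, v = \int_\Omega \nabla u \cdot \nabla v - \int_{\partial \Omega} \frac{\partial u}{\partial n}\, v = \int_\Omega \nabla u \cdot \nabla v, $$ the boundary term vanishing precisely because $v = 0$ on $\partial \Omega$.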

However, the theorem cannot be applied in $C^n$ function spaces, since they are not complete for the natural scalar product, which makes it necessary to generalize the meaning of the Poisson equation to the so-called Sobolev spaces.

1.2. A short introduction to Sobolev spaces: the case of $H^1$

After a quick reminder on the $L^2$ space, we introduce the notion of weak derivatives. This allows us to define the Sobolev space $H^1$ of square-integrable weakly differentiable functions, which can be proved to be a Hilbert space. While the Hilbert space structure is extremely useful, the functions of this space might not even be continuous anymore. This will add further difficulties, like defining the notion of trace of an $H^1$ function on $\partial \Omega$ to make sense of the boundary conditions.
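
As a preview of the definitions that will be made precise in the course: $w_i \in L^2(\Omega)$ is the $i$-th weak partial derivative of $u \in L^2(\Omega)$ if $$ \int_\Omega u \, \partial_i \varphi = - \int_\Omega w_i \, \varphi \quad \text{for all } \varphi \in C_c^\infty(\Omega), $$ and $H^1(\Omega)$ is the space of functions $u \in L^2(\Omega)$ whose weak gradient also lies in $L^2(\Omega)$, equipped with the scalar product $$ \langle u, v \rangle_{H^1} = \int_\Omega u v + \int_\Omega \nabla u \cdot \nabla v. $$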

We then proceed to study canonical examples, such as the Poisson equation with various boundary conditions and other elliptic PDEs. We will show that the solutions to such problems can often be seen as the minimizers of a certain energy.
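
For instance, a standard fact that we will prove: the weak solution of the Poisson problem above is the unique minimizer over $H^1_0(\Omega)$ of the energy $$ J(v) = \frac{1}{2} \int_\Omega |\nabla v|^2 - \int_\Omega f v. $$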

1.3. The Finite Element Method

Solving a PDE algorithmically requires approximating it by a finite-dimensional problem. The most elementary way to do so is probably the finite difference method, which amounts to discretizing the domain $\Omega$. The Finite Element Method takes another approach, namely discretizing the Sobolev space $H^1(\Omega)$ (which, in turn, requires meshing $\Omega$). We will detail the principle of internal approximation of a variational problem. We then proceed to explain the principles of the FEM, first for $P_1$ elements in $1$D, writing explicitly the mass and stiffness matrices corresponding to the Poisson problem. We then generalize to $n$D, with triangular meshes.
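
To fix ideas, here is a minimal sketch (assuming a uniform mesh of $(0,1)$, homogeneous Dirichlet conditions and $f = 1$; the variable names are only illustrative) of the $P_1$ stiffness matrix assembly and solve in Python:

```python
import numpy as np

# P1 finite elements on a uniform mesh of (0, 1) for -u'' = f, u(0) = u(1) = 0.
n = 50                        # number of interior nodes
h = 1.0 / (n + 1)             # uniform mesh size
x = np.linspace(h, 1.0 - h, n)

# Stiffness matrix: tridiagonal with 2/h on the diagonal and -1/h off-diagonal.
K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# Load vector: F_i = \int f * phi_i = h exactly, since f = 1.
F = h * np.ones(n)

u = np.linalg.solve(K, F)     # nodal values of the P1 approximation u_h
```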

Finally, we will use the theoretical knowledge of the previous sections to numerically approximate otherwise unsolvable problems. We will use FreeFEM++, a language and framework which allows one to solve a variational formulation in an easy and transparent way.
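
For readers more comfortable with Python, the same workflow can also be sketched with FEniCS, the library used for the fluid simulation figure above. The snippet below is only an illustrative sketch (legacy dolfin interface) of a variational solve, not the FreeFEM++ scripts we will actually write:

```python
from dolfin import *  # legacy FEniCS interface

# Poisson problem on the unit square with homogeneous Dirichlet conditions.
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "P", 1)

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)

a = dot(grad(u), grad(v)) * dx          # bilinear form a(u, v)
L = f * v * dx                          # linear form l(v)
bc = DirichletBC(V, Constant(0.0), "on_boundary")

uh = Function(V)
solve(a == L, uh, bc)                   # solve the discrete variational problem
```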

2. Neural Networks for PDEs

Neural Network
The mandatory eye-catching representation of a Neural Network you have to put on every serious article on Machine Learning.
2.1. A brief introduction to Neural Networks

We provide a brief overview of Deep Learning and its uses (supervised, unsupervised, reinforcement, generative, regression, classification). We define the notion of a dense neural network, then state and prove the Universal Approximation Theorem, which states that every continuous function on a compact set can be approximated arbitrarily well by a NN.
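
Concretely (a minimal sketch, the architecture is just an example), a dense network in PyTorch:

```python
import torch
import torch.nn as nn

# A dense (fully connected) network mapping a point x in R^d to a scalar,
# with tanh activations between the linear layers.
class DenseNet(nn.Module):
    def __init__(self, dim_in=1, width=32, depth=3, dim_out=1):
        super().__init__()
        layers = [nn.Linear(dim_in, width), nn.Tanh()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.Tanh()]
        layers.append(nn.Linear(width, dim_out))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```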

We then introduce the concepts of loss function, gradient descent and backpropagation, and tackle the training of a NN.
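
In code, one training loop looks roughly as follows (a sketch on a toy supervised problem, fitting $\sin(\pi x)$ on $[0,1]$; all names are illustrative):

```python
import math
import torch
import torch.nn as nn

# Toy data: learn u(x) = sin(pi x) from samples.
x_train = torch.rand(256, 1)
y_train = torch.sin(math.pi * x_train)

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)  # loss function
    loss.backward()                          # backpropagation: compute gradients
    optimizer.step()                         # one (Adam) gradient descent step
```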

2.2. Using Neural Networks as PDE solvers

Through FEM, we have seen a precise and elegant way to approximate the solution of a PDE. Using the approximation capabilities of NNs, we now see how to solve complex PDEs using little to no data. We will discuss recent developments in the field, such as:

  • Physics Informed Neural Networks (PINNs) [1]
  • Deep Ritz Method (DRM) [2]
  • Variational PINNs (vPINNs) [3]
  • Weak Adversarial Neural Networks (wANN) [4]
  • Deep Operator Networks (DeepONets) [5]
among others.

Using PyTorch, we will see on an example how fast and easy it is to solve classical PDEs.
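
For a taste of the PINN approach, here is a minimal sketch (an assumed toy setup, not the exact code we will use in class) for the 1D Poisson problem $-u'' = \pi^2 \sin(\pi x)$ on $(0,1)$ with $u(0)=u(1)=0$:

```python
import math
import torch
import torch.nn as nn

# PINN sketch: minimize the PDE residual at collocation points
# plus a penalty enforcing the boundary conditions.
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                      nn.Linear(32, 32), nn.Tanh(),
                      nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def pde_residual(x):
    x.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    f = (math.pi ** 2) * torch.sin(math.pi * x)
    return -d2u - f                          # residual of -u'' = f

for step in range(5000):
    optimizer.zero_grad()
    x_int = torch.rand(128, 1)               # interior collocation points
    x_bnd = torch.tensor([[0.0], [1.0]])     # boundary points
    loss = (pde_residual(x_int) ** 2).mean() + (model(x_bnd) ** 2).mean()
    loss.backward()
    optimizer.step()
```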

Project

The course will end with a project, carried out in groups of 2-3 students. While the choice of subject is quite free, the students will have to implement at least one simulation using FEM or NNs. It will be possible to treat the chosen subject from a more theoretical or a more numerical point of view, depending on the taste of each student.

To help you choose, I give here some ideas of possible projects (some of which may be difficult...):

Examples of previous projects:

  • Testing Physics-Informed Neural Networks for the Solution of Hyperbolic Conservation Laws, Leon Jacobi and Simon Krotsch (report) (code)

Cantilever optimization
Topology optimization of a cantilever using the SIMP method (source).