Hildebrand Department of Petroleum and Geosystems Engineering
Department of Aerospace Engineering and Engineering Mechanics
Oden Institute for Computational Engineering and Science
The University of Texas at Austin
August 18, 2023
Physics-informed neural networks (PINNs) were introduced in (Raissi, Perdikaris, and Karniadakis 2019) as a general method for solving partial differential equations.
The paper has already received >5900 citations since the preprint was posted on arXiv in 2018!
\[ \begin{align} r_d(u_{NN}, f_{NN}) &= \mathcal{L} u_{NN} - f_{NN},\qquad x \in \Omega \\ r_e(u_{NN}) &= u_{NN} - h,\qquad x \in \partial \Omega_h \\ r_n(u_{NN}) &= \frac{\partial u_{NN}}{\partial x} - g,\qquad x \in \partial \Omega_g \\ r_{u}(u_{NN}) &= u_{NN}(x^u_i) - u_m(x^u_i), \qquad i=1,2,...,n \\ r_{f}(f_{NN}) &= f_{NN}(x^f_i) - f_m(x^f_i), \qquad i=1,2,...,m \end{align} \]
\[ \begin{gather} L_{MSE} = \frac{1}{N}\sum_{i=1}^{N} r_d^2 + \frac{1}{N_e}\sum_{i=1}^{N_e} r_e^2 + \frac{1}{N_n}\sum_{i=1}^{N_n} r_n^2 + \frac{1}{n}\sum_{i=1}^{n} r_{u}^2 + \frac{1}{m}\sum_{i=1}^m r_{f}^2 \end{gather} \]
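A minimal PyTorch sketch of how these residuals and the loss might be assembled (the network objects `u_net`, `f_net` and the collocation/measurement tensors are assumptions, not code from the paper); the single-output case is shown, with \(\mathcal{L} = \partial^2/\partial x^2\) as a placeholder operator:

```python
import torch

def pde_residual(u_net, f_net, x):
    """Interior residual r_d = L u_NN - f_NN (here L = d^2/dx^2 for illustration)."""
    x = x.clone().requires_grad_(True)
    u = u_net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u - f_net(x)

def mse_loss(u_net, f_net, x_int, x_h, h, x_u, u_m, x_f, f_m):
    """Composite MSE over interior, Dirichlet-boundary, and measurement residuals
    (the Neumann term r_n is analogous and omitted for brevity)."""
    r_d = pde_residual(u_net, f_net, x_int)
    r_e = u_net(x_h) - h
    r_u = u_net(x_u) - u_m
    r_f = f_net(x_f) - f_m
    return sum(r.pow(2).mean() for r in (r_d, r_e, r_u, r_f))
```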
\[ \begin{align} r_d(u_{NN}^j, f_{NN}^j) &= \mathcal{L} u_{NN}^j - f_{NN}^j,\qquad x \in \Omega \\ r_e(u_{NN}^j) &= u_{NN}^j - h,\qquad x \in \partial \Omega_h \\ r_n(u_{NN}^j) &= \frac{\partial u_{NN}^j}{\partial x} - g,\qquad x \in \partial \Omega_g \\ r_{um}(u_{NN}^j) &= u_{NN}^j(x^u_i) - \left(u_m(x^u_i) + \sigma_u^j\right), \qquad i=1,2,...,n \\ r_{fm}(f_{NN}^j) &= f_{NN}^j(x^f_i) - \left(f_m(x^f_i) + \sigma_f^j\right), \qquad i=1,2,...,m \end{align} \]
\[ \begin{gather} L_{MSE} = \frac{1}{M}\sum_{j=1}^{M} \left( \frac{1}{N}\sum_{i=1}^{N} r_d^2 + \frac{1}{N_e}\sum_{i=1}^{N_e} r_e^2 + \frac{1}{N_n}\sum_{i=1}^{N_n} r_n^2 + \frac{1}{n}\sum_{i=1}^{n} r_{um}^2 + \frac{1}{m}\sum_{i=1}^m r_{fm}^2 \right) \end{gather} \]
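The key difference from the baseline loss is that each output \(j\) is trained against its own fixed noise realization \(\sigma^j\). A sketch, assuming the networks return one column per output and the noise is drawn once before training (point counts and noise scales are placeholders):

```python
import torch

M = 500                                    # simultaneous outputs u_NN^j, f_NN^j
n, m = 32, 32                              # assumed numbers of measurement points
sigma_u, sigma_f = 0.05, 0.05              # assumed measurement-noise scales
u_m, f_m = torch.zeros(n), torch.zeros(m)  # placeholders for the clean data

# One fixed noise realization per output j, drawn once and reused every epoch.
u_noisy = u_m + sigma_u * torch.randn(M, n)   # (M, n), broadcasts over rows
f_noisy = f_m + sigma_f * torch.randn(M, m)   # (M, m)

def mo_data_loss(u_net, f_net, x_u, x_f):
    """Measurement terms of the MO loss: mean over points i and outputs j."""
    r_um = u_net(x_u) - u_noisy.T    # u_net(x_u) has shape (n, M)
    r_fm = f_net(x_f) - f_noisy.T
    return r_um.pow(2).mean() + r_fm.pow(2).mean()
```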
\[ \begin{gathered} \lambda \frac{\partial^2 u}{\partial x^2} = f, \qquad x \in [-0.7, 0.7] \end{gathered} \] where \(\lambda = 0.01\) and the manufactured solution is \(u=\sin^3(6x)\)
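Since the solution is manufactured, the consistent forcing \(f = \lambda\,u''\) can be generated symbolically; a quick SymPy check (an assumed workflow, not necessarily how the data were produced):

```python
import sympy as sp

x = sp.symbols("x")
lam = sp.Rational(1, 100)                # lambda = 0.01
u = sp.sin(6 * x) ** 3                   # manufactured solution
f = sp.simplify(lam * sp.diff(u, x, 2))  # forcing consistent with u
print(f)                                 # closed-form lambda * u''
```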


2 neural networks: \(u_{NN}\) and \(f_{NN}\)
2 hidden layers with 20 and 40 neurons, respectively
\(\tanh\) activation function
ADAM optimizer
\(10^{-3}\) learning rate
Xavier initialization
10000 epochs
500 outputs (a configuration sketch follows this list)
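A configuration sketch matching the listed hyperparameters (PyTorch is an assumption; the original implementation may differ):

```python
import torch
import torch.nn as nn

def make_net(n_out=500):
    """MLP with 2 hidden layers (20 and 40 neurons), tanh activations,
    Xavier-initialized weights, and 500 simultaneous outputs."""
    net = nn.Sequential(
        nn.Linear(1, 20), nn.Tanh(),
        nn.Linear(20, 40), nn.Tanh(),
        nn.Linear(40, n_out),
    )
    for layer in net:
        if isinstance(layer, nn.Linear):
            nn.init.xavier_normal_(layer.weight)
            nn.init.zeros_(layer.bias)
    return net

u_net, f_net = make_net(), make_net()
optimizer = torch.optim.Adam(
    [*u_net.parameters(), *f_net.parameters()], lr=1e-3
)
# Each of the 10000 epochs: optimizer.zero_grad(); loss.backward(); optimizer.step()
```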
















\[ \begin{gathered} \lambda \frac{\partial^2 u}{\partial x^2} + k \tanh(u) = f, \qquad x \in [-0.7, 0.7] \end{gathered} \] where \(\lambda = 0.01\), \(k=0.7\), and the manufactured solution is \(u=\sin^3(6x)\)
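Only the differential operator changes relative to the linear case; a sketch of the corresponding interior residual (single-output form, names carried over from the earlier sketches):

```python
import torch

def nonlinear_residual(u_net, f_net, x, lam=0.01, k=0.7):
    """r_d = lam * u_xx + k * tanh(u) - f_NN for the nonlinear 1D operator."""
    x = x.clone().requires_grad_(True)
    u = u_net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return lam * d2u + k * torch.tanh(u) - f_net(x)
```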










\[ \begin{gathered} \lambda \left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) + u\left(u^2 -1 \right) = f, \qquad x,y \in [-1, 1] \end{gathered} \] where \(\lambda = 0.01\) and the manufactured solution is \(u=\sin(\pi x)\sin(\pi y)\)
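A sketch of the interior residual for this 2D semilinear operator, taking the input as an \((n, 2)\) tensor of \((x, y)\) collocation points (single-output form for clarity):

```python
import torch

def residual_2d(u_net, f_net, xy, lam=0.01):
    """r_d = lam * (u_xx + u_yy) + u * (u^2 - 1) - f_NN."""
    xy = xy.clone().requires_grad_(True)
    u = u_net(xy)
    g = torch.autograd.grad(u, xy, torch.ones_like(u), create_graph=True)[0]
    u_x, u_y = g[:, :1], g[:, 1:]
    u_xx = torch.autograd.grad(u_x, xy, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    u_yy = torch.autograd.grad(u_y, xy, torch.ones_like(u_y), create_graph=True)[0][:, 1:]
    return lam * (u_xx + u_yy) + u * (u ** 2 - 1) - f_net(xy)
```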


2 neural networks: \(u_{NN}\) and \(f_{NN}\)
3 hidden layers with 200 neurons each
\(\tanh\) activation function
ADAM optimizer
\(10^{-3}\) learning rate
Xavier initialization
50000 epochs
2000 outputs (a training-loop sketch follows this list)
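Putting the pieces together, a minimal training loop under the listed settings (the residual and loss functions and the collocation/boundary tensors come from the earlier sketches and are assumptions; the full MO case additionally needs per-output derivatives rather than the single-output residual shown):

```python
import torch

optimizer = torch.optim.Adam(
    [*u_net.parameters(), *f_net.parameters()], lr=1e-3
)
for epoch in range(50_000):
    optimizer.zero_grad()
    loss = (
        residual_2d(u_net, f_net, xy_int).pow(2).mean()   # interior PDE term
        + (u_net(xy_bnd) - h_bnd).pow(2).mean()           # Dirichlet term
        + mo_data_loss(u_net, f_net, xy_u, xy_f)          # noisy-measurement terms
    )
    loss.backward()
    optimizer.step()
```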








\[ \begin{gathered} \lambda \frac{\partial^2 u}{\partial x^2} + k \tanh(u) = f, \qquad x \in [-0.7, 0.7] \end{gathered} \] where \(\lambda = 0.01\).
\(k=[???, ???, ???, \dots, ???]\) with \(N\) entries, one for each of the \(N\) outputs of the MO-PINN
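For the inverse problem, each output \(j\) carries its own coefficient \(k_j\); a sketch of making \(k\) a trainable vector and reading off its spread after training (the initial guess and the network names are assumptions):

```python
import torch

N = 500                                       # one k_j per MO-PINN output (assumed)
k = torch.nn.Parameter(0.5 * torch.ones(N))   # trainable, shared initial guess

optimizer = torch.optim.Adam(
    [k, *u_net.parameters(), *f_net.parameters()], lr=1e-3
)
# Inside the loss, output j uses k[j]:  r_d^j = lam * u_xx^j + k[j] * tanh(u^j) - f^j.
# After training, the spread of the fitted entries summarizes the uncertainty in k:
k_mean, k_std = k.detach().mean(), k.detach().std()
```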










\[ \begin{gathered} \lambda \left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) + u\left(u^2 -1 \right) = f, \qquad x,y \in [-1, 1] \end{gathered} \] where \(\lambda = 0.01\) and the manufactured solution is \(u=\sin(\pi x)\sin(\pi y)\)
\(k=[???, ???, ???, \dots, ???]\) with \(N\) entries, one for each of the \(N\) outputs of the MO-PINN








Quantile-quantile plot of \(u\) at 9 locations
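A sketch of how such a plot can be produced with SciPy, comparing the ensemble of outputs at one location against a fitted normal (the sample array below is a placeholder):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Placeholder for the MO-PINN ensemble u_NN^j(x0) (e.g., 500 outputs)
# at one of the 9 probe locations.
u_samples = np.random.normal(loc=0.2, scale=0.05, size=500)

fig, ax = plt.subplots()
stats.probplot(u_samples, dist="norm", plot=ax)  # sample vs. normal quantiles
ax.set_title(r"Q-Q plot of $u$ at one location")
plt.show()
```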





