What PINNs are
Physics‑Informed Neural Networks (PINNs) are neural networks trained to satisfy governing equations (ordinary or partial differential equations), boundary conditions, and initial conditions. Instead of relying solely on labeled data, they minimize a composite loss whose main component is the mean squared residual of the differential equation at sampled collocation points in the domain.
In essence, a PINN parameterizes a function \(u(\mathbf{x}, t; \theta)\) with a neural network. During training, automatic or numerical differentiation computes derivatives of \(u\), and the PDE residual \(R[u]\) is driven toward zero while boundary and initial conditions are enforced. This often yields data‑efficient learning and a solution defined continuously over the whole domain. The composite loss typically combines four terms:
- Physics term: Mean squared PDE residual \( \mathbb{E}[R[u]^2] \)
- Boundary term: Errors on boundaries \( \mathbb{E}[(u - u_{\partial\Omega})^2] \)
- Initial term: Error against the prescribed state at \(t=0\)
- Optional data term: Misfit to observations, if any are available
By tuning the weights of these terms, you balance strict physics adherence with condition satisfaction and data fit.
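As a concrete sketch (assuming PyTorch and the 1D heat equation from the demo below, with \(T=1\); `model` and the weights `w_pde`, `w_bc`, `w_ic` are illustrative names), the composite loss might look like:

```python
import math
import torch

def pinn_loss(model, alpha, w_pde=1.0, w_bc=1.0, w_ic=1.0):
    # Physics term: mean squared PDE residual at random collocation points.
    x = torch.rand(256, 1, requires_grad=True)  # x in [0, 1]
    t = torch.rand(256, 1, requires_grad=True)  # t in [0, T], T = 1 here
    u = model(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    loss_pde = ((u_t - alpha * u_xx) ** 2).mean()

    # Boundary term: u(0, t) = u(1, t) = 0 (Dirichlet).
    tb = torch.rand(64, 1)
    u_left = model(torch.cat([torch.zeros_like(tb), tb], dim=1))
    u_right = model(torch.cat([torch.ones_like(tb), tb], dim=1))
    loss_bc = (u_left ** 2).mean() + (u_right ** 2).mean()

    # Initial term: u(x, 0) = sin(pi * x).
    xi = torch.rand(64, 1)
    u_init = model(torch.cat([xi, torch.zeros_like(xi)], dim=1))
    loss_ic = ((u_init - torch.sin(math.pi * xi)) ** 2).mean()

    return w_pde * loss_pde + w_bc * loss_bc + w_ic * loss_ic
```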
How they work
- Neural ansatz: Choose a differentiable network \(u_\theta(x,t)\) (e.g., MLP with \(\tanh\)).
- Collocation: Sample points in the space‑time domain to evaluate the physics residual.
- Differentiation: Compute derivatives of \(u_\theta\) to build the PDE residual \( R[u_\theta] \).
- Composite loss: Sum physics, boundary, and initial losses (plus optional data loss).
- Optimization: Update \(\theta\) via gradient descent (SGD/Adam) until residuals are small.
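Putting these steps together, a minimal training loop might look like the following sketch (assuming PyTorch; `pinn_loss` is the function sketched above, and the architecture and hyperparameters are illustrative):

```python
import torch
import torch.nn as nn

# Neural ansatz: a small MLP with tanh activations mapping (x, t) -> u.
model = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

alpha = 0.1  # diffusivity (illustrative value)
for step in range(10_000):
    optimizer.zero_grad()
    loss = pinn_loss(model, alpha)  # composite physics + boundary + initial loss
    loss.backward()
    optimizer.step()
    if step % 1_000 == 0:
        print(f"step {step}: loss = {loss.item():.3e}")
```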
Many PINN implementations rely on automatic differentiation. In this demo, we instead compute PDE derivatives with respect to the inputs using second‑order central differences, which keeps the browser implementation minimal and transparent.
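Concretely, the approximations are \( u_t \approx \frac{u(x,\,t+h) - u(x,\,t-h)}{2h} \) and \( u_{xx} \approx \frac{u(x+h,\,t) - 2u(x,t) + u(x-h,\,t)}{h^2} \). Here is a small sketch (written in Python for consistency with the other snippets, though the demo itself runs in the browser; `u` is any callable, e.g. the network's forward pass):

```python
def heat_residual(u, x, t, alpha, h=1e-3):
    """Heat-equation residual u_t - alpha * u_xx via central differences."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_t - alpha * u_xx
```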
Live demo: 1D heat equation PINN
We solve \( u_t = \alpha\,u_{xx} \) on \(x \in [0,1]\), \(t \in [0,T]\) with Dirichlet boundaries \(u(0,t)=u(1,t)=0\) and initial condition \(u(x,0)=\sin(\pi x)\). The exact solution is \( u(x,t) = e^{-\alpha\pi^2 t}\sin(\pi x) \). The network learns to satisfy the PDE residual and conditions without seeing the exact solution.
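You can verify the exact solution by direct substitution:
\[
u_t = -\alpha\pi^2 e^{-\alpha\pi^2 t}\sin(\pi x), \qquad
u_{xx} = -\pi^2 e^{-\alpha\pi^2 t}\sin(\pi x),
\]
so \( u_t = \alpha\,u_{xx} \) holds everywhere, while \( u(x,0)=\sin(\pi x) \) and \( u(0,t)=u(1,t)=0 \) satisfy the initial and boundary conditions.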
Domain: x horizontal, t vertical. The exact solution is overlaid (sampled on a grid) for reference only; it is NOT used for training.
Tip: start training, then adjust the loss weights and learning rate slowly. If training becomes unstable, pause and lower the learning rate.
Strengths and limits
- Pros: Data‑efficient, embeds physics priors, works with sparse/partial observations, continuous solutions.
- Cons: Optimization can be stiff, loss balancing is delicate, high‑dimensional or chaotic systems are challenging.
- Practical tips: Normalize inputs/outputs, use smooth activations (\(\tanh\), SiLU), schedule loss weights, and adaptively resample collocation points where the residual is large (see the sketch below).
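For example, a simple residual‑based resampling step (a sketch with illustrative names, reusing the PyTorch conventions above) scores a pool of random candidates and keeps those with the largest residual:

```python
import torch

def resample_collocation(model, alpha, n_candidates=4096, n_keep=512):
    # Score candidate points by |PDE residual| and keep the worst offenders.
    x = torch.rand(n_candidates, 1, requires_grad=True)
    t = torch.rand(n_candidates, 1, requires_grad=True)
    u = model(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    residual = (u_t - alpha * u_xx).abs().squeeze(1)
    idx = residual.topk(n_keep).indices
    return x[idx].detach(), t[idx].detach()
```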
When to use PINNs
- Scarce data: When measurements are minimal but the governing equations are known.
- Inverse problems: Estimate hidden parameters by embedding them as trainable variables (see the sketch after this list).
- Surrogates: Learn fast approximations to expensive solvers for design loops.
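For the inverse‑problem case, a minimal sketch (reusing `model` and `pinn_loss` from the sketches above; `obs_x`, `obs_t`, `obs_u` are hypothetical measurement tensors) makes \(\alpha\) trainable alongside the network weights:

```python
import torch
import torch.nn as nn

log_alpha = nn.Parameter(torch.tensor(0.0))  # learn alpha > 0 via exp
optimizer = torch.optim.Adam(list(model.parameters()) + [log_alpha], lr=1e-3)

for step in range(10_000):
    optimizer.zero_grad()
    alpha = log_alpha.exp()
    # Physics + boundary/initial terms as before, but alpha is now trainable,
    # plus a data term on the (hypothetical) observations.
    loss = pinn_loss(model, alpha)
    u_pred = model(torch.cat([obs_x, obs_t], dim=1))
    loss = loss + ((u_pred - obs_u) ** 2).mean()
    loss.backward()
    optimizer.step()
```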
Classical solvers remain gold standards for accuracy and stability; PINNs complement them when data or differentiability matters.