There has been significant recent work on using neural networks to solve PDEs on infinite-dimensional spaces. In this talk we consider two examples. First, we prove that transformers can effectively approximate the mean-field dynamics of interacting particle systems exhibiting collective behavior, which are fundamental in modeling phenomena across physics, biology, and engineering. We provide theoretical bounds on the approximation error and validate the findings through numerical simulations.
Second, we show that finite-dimensional neural networks can be used to approximate eigenfunctions of the Laplace-Beltrami operator on manifolds. We provide quantitative insights into the number of neurons needed to learn spectral information and shed light on the non-convex optimization landscape of training.