Progress in contemporary generative modeling for continuous-valued data has been driven by algorithms, such as score-based diffusion, that are built on the dynamical transport of measure. I will describe a research program with my collaborators to build a general paradigm for dynamical generative modeling that we call stochastic interpolants. Crucially, these methods rely on access to large amounts of data from the target distribution. A dual problem is therefore to learn to sample from a target distribution given access only to its unnormalized density and the gradient of that density. In the latter half of this talk, I will describe recent results on learning samplers for this problem built on dynamical transport and Jarzynski's equality from non-equilibrium thermodynamics. Interestingly, the learning algorithms for these samplers do not require backpropagation through the simulation of the dynamical system.
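To give a concrete feel for the stochastic-interpolant idea, here is a minimal NumPy sketch. It is an illustrative toy, not the speakers' exact construction: it uses one common choice of interpolant, x_t = (1 - t) x0 + t x1 + sqrt(t(1 - t)) z, which connects a sample x0 from a base distribution at t = 0 to a sample x1 from the target at t = 1, with latent Gaussian noise z. The Gaussian base and shifted-Gaussian "target" below are stand-in assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolant(x0, x1, t, z):
    # One simple stochastic interpolant (assumed form, for illustration):
    #   x_t = (1 - t) * x0 + t * x1 + sqrt(t * (1 - t)) * z
    # The noise coefficient vanishes at t = 0 and t = 1, so the path
    # starts exactly at a base sample and ends exactly at a target sample.
    return (1.0 - t) * x0 + t * x1 + np.sqrt(t * (1.0 - t)) * z

# Toy distributions: standard Gaussian base, shifted Gaussian "target".
x0 = rng.standard_normal((1000, 2))        # base samples
x1 = rng.standard_normal((1000, 2)) + 5.0  # target samples
z = rng.standard_normal((1000, 2))         # latent noise

# Midpoint of the transport path: a blend of base and target plus noise.
x_mid = interpolant(x0, x1, 0.5, z)
print(x_mid.mean(axis=0))  # close to [2.5, 2.5]
```

In the full method, one would regress a time-dependent velocity (or score) field onto the time derivative of such interpolant paths and then generate new samples by integrating the learned field from t = 0 to t = 1; that learning step is omitted here.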