Abstract: New recording technologies are transforming neuroscience, allowing us to precisely quantify neural activity and natural behavior. To realize this potential, we need computational methods that can reveal simplifying structure in high-dimensional neural and behavioral time series and draw connections between these domains. Our methods must balance two competing objectives: we seek interpretable representations of the data, but also accurate predictive models. I will present recent progress toward this goal with hierarchical and recurrent models, and show example applications in larval zebrafish and C. elegans. In both examples, we blend structured, hierarchical models for representation learning with powerful predictive tools like convolutional and recurrent neural networks. Alongside these examples, I will discuss the Bayesian inference algorithms necessary to fit these models at scale. Finally, I will conclude with an outlook on how these models can be grounded in theory, offering a path toward a more mechanistic understanding of neural computation and behavior.