Tuesday, January 29, 2019
01/29/2019 - 4:00pm
Abstract: New recording technologies are transforming neuroscience, allowing us to precisely quantify neural activity and natural behavior. To realize this potential, we need computational methods that can reveal simplifying structure in high-dimensional neural and behavioral time series and draw connections between these domains. Our methods must balance two contrasting objectives: we seek interpretable representations of the data but also accurate predictive models. I will present recent progress toward this goal with hierarchical and recurrent models, and show example applications in larval zebrafish and C. elegans. In both examples, we blend structured, hierarchical models for representation learning with powerful predictive tools, like convolutional and recurrent neural networks. Alongside these examples, I will discuss the Bayesian inference algorithms necessary to fit these models at scale. Finally, I will conclude with an outlook for how these models can be grounded in theory, offering a path toward a more mechanistic understanding of neural computation and behavior.
01/29/2019 - 4:15pm
The hyperbolic structure on a surface can be changed by several natural operations. The easiest is to cut the surface open along a simple closed separating geodesic and to reglue the two pieces with a twist. Such twists are examples of more general earthquake deformations along geodesic laminations on the surface. In this talk we describe these natural operations. We introduce new operations which change the hyperbolic structure to a real projective structure and allow one to move within the space of real projective structures on the surface.
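As an illustrative aside (the notation below is standard but not taken from the abstract): the twist deformation along a simple closed geodesic has a particularly clean description in Fenchel-Nielsen coordinates.

```latex
% Fix a pants decomposition \gamma_1, \dots, \gamma_{3g-3} of a closed
% genus-g hyperbolic surface, giving Fenchel--Nielsen coordinates
% (\ell_i, \tau_i): the length and twist parameter attached to each
% curve \gamma_i.  The time-t twist along \gamma_j keeps every length
% fixed and shears the gluing along \gamma_j only:
\[
  \mathrm{tw}^{\,t}_{\gamma_j} \colon
  (\ell_i, \tau_i) \longmapsto (\ell_i,\; \tau_i + t\,\delta_{ij}).
\]
% Thurston's earthquake theorem generalizes this picture: any two
% hyperbolic structures on the surface are related by a unique left
% earthquake along some measured geodesic lamination.
```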