Abstract: Convolutional neural networks (CNNs) are the go-to tool for signal processing tasks in machine learning. But how and why do they work so well? Using the basic guiding principles of CNNs, namely their convolutional structure, invariance properties, and multi-scale nature, we will discuss how the CNN architecture arises as a natural byproduct of these principles in the language of nonlinear signal processing. In doing so, we will extract some core ideas that allow us to apply these types of algorithms in various contexts, including the multi-reference alignment inverse problem, generative models for textures, and supervised machine learning for quantum many-particle systems. Time permitting, we will also discuss how these core ideas can be used to generalize CNNs to manifolds and graphs, while still providing mathematical guarantees on the nature of the representation computed by these tools.