Identifiability is a desirable property of a statistical model: given a consistent estimator, it guarantees recovery of the true model parameters in the limit. Deep neural networks lack identifiability in this sense, because their (over-)parameterization admits many symmetries; for example, permuting hidden units or rescaling weights can leave the network's function unchanged. In this talk, I aim to rehabilitate identifiability by proving a surprising result: a large class of supervised and self-supervised models are in fact identifiable in function space, up to a linear indeterminacy.
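In symbols, the claim can be stated roughly as follows (the notation here is illustrative, not taken from the talk): if two parameter vectors both attain the optimum, their learned representation functions agree up to an invertible linear map,

```latex
% Linear identifiability, stated informally.
% f_\theta denotes the representation function of a model with parameters \theta;
% \theta and \theta' are any two optimal parameter settings.
f_{\theta}(x) = A \, f_{\theta'}(x) \quad \text{for all inputs } x,
\qquad \text{with } A \text{ an invertible matrix independent of } x.
```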
Deep learning is increasingly characterized by efficient function approximators trained on growing quantities of data. This trend interfaces with the asymptotic character of identifiability in a pleasing way: as network architectures improve and more data becomes available, we expect the representation functions approximated by deep neural networks to approach a stable set of optima in function space.
After a brief review of classical identifiability as well as its recent extensions, I will derive sufficient conditions for linear identifiability, and present empirical support for the result on both synthetic and real-world data.
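As a minimal sketch of how such empirical support might be checked (assuming access to representation matrices from two training runs; the setup below is synthetic and hypothetical, not the talk's actual experiment): fit a linear map between the two sets of representations by least squares and inspect the residual.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two runs learn representations that differ
# only by an unknown invertible linear map A, plus small noise.
n, d = 1000, 16
z1 = rng.normal(size=(n, d))                   # representations from run 1
A = rng.normal(size=(d, d))                    # the linear indeterminacy
z2 = z1 @ A + 0.01 * rng.normal(size=(n, d))   # representations from run 2

# Check linear identifiability empirically: fit z2 ~ z1 @ A_hat.
A_hat, *_ = np.linalg.lstsq(z1, z2, rcond=None)
residual = np.linalg.norm(z2 - z1 @ A_hat) / np.linalg.norm(z2)
print(f"relative residual: {residual:.4f}")
```

A near-zero relative residual indicates the two representations are related by a linear transformation, which is what linear identifiability predicts for models trained to the same optimum.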
The content of this talk represents work done jointly with Diederik P. Kingma and Luke Metz. An early version of the result was presented at the Conference on the Mathematical Theory of Deep Neural Networks (2019).