When and How Can Deep Generative Models be Inverted?

Seminar: 
Applied Mathematics
Event time: 
Monday, December 14, 2020 - 2:30pm
Location: 
Zoom Meeting ID: 97670014308
Speaker: 
Dror Simon & Aviad Aberdam
Speaker affiliation: 
Technion
Event description: 

Abstract: Deep generative models (e.g., GANs and VAEs) have been developed quite extensively in recent years. Lately, there has been increased interest in inverting such models: given a (possibly corrupted) signal, we wish to recover the latent vector that generated it. Building upon sparse representation theory, we define conditions under which such generative models are invertible with a unique solution; these conditions rely only on the cardinalities of the hidden layers and apply to any inversion algorithm (gradient descent, a deep encoder, etc.). Importantly, the proposed analysis applies to any trained model and does not rely on i.i.d. Gaussian weights. Furthermore, we introduce two layer-wise inversion pursuit algorithms for trained generative networks of arbitrary depth, one of which is accompanied by recovery guarantees. Finally, we validate our theoretical results numerically and show that our method outperforms gradient descent when inverting such generators, for both clean and corrupted signals.
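The gradient-descent baseline mentioned in the abstract can be illustrated on a toy problem. The sketch below is not the speakers' layer-wise pursuit algorithm; it is a minimal, assumed setup with a single random ReLU layer standing in for a trained generator, inverted by minimizing the squared signal-space error over the latent vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained generator: G(z) = relu(W @ z).
# (Hypothetical weights; the talk's analysis targets arbitrary
# trained models, not just random Gaussian ones.)
d_latent, d_signal = 8, 32
W = rng.standard_normal((d_signal, d_latent))

def G(z):
    return np.maximum(W @ z, 0.0)

# Ground-truth latent vector and the observed (clean) signal.
z_true = rng.standard_normal(d_latent)
x = G(z_true)

# Invert by gradient descent on f(z) = 0.5 * ||G(z) - x||^2.
z = 0.1 * rng.standard_normal(d_latent)  # small random init (zero init is a dead point of the ReLU)
lr = 0.01
for _ in range(2000):
    pre = W @ z                       # pre-activation W @ z
    r = np.maximum(pre, 0.0) - x      # residual G(z) - x
    grad = W.T @ (r * (pre > 0))      # chain rule through the ReLU mask
    z -= lr * grad

print("signal-space error:", np.linalg.norm(G(z) - x))
```

With more latent dimensions, depth, or a corrupted observation `x`, this plain descent can stall or converge to the wrong latent vector, which is the regime where the layer-wise pursuit algorithms discussed in the talk are claimed to help.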

Email tatianna.curtis@yale.edu for the Zoom password.