Abstract: Deep probabilistic generative models are flexible models of complex, high-dimensional data. They have found numerous applications, e.g., in vision, natural language processing, chemistry, biology, and physics. They are also important components of model-based reinforcement learning algorithms. This widespread use of deep generative models underscores the need to understand how they work and where they fall short. In this talk I will discuss the two main learning frameworks for deep generative models, the variational autoencoder (VAE) and the generative adversarial network (GAN). I will highlight their shortcomings and introduce two of my works that propose solutions to those shortcomings. More specifically, I will discuss my latest works on reweighted expectation maximization and entropy-regularized adversarial learning as alternatives to the VAE and the GAN approaches, respectively.
Bio: Adji is a Ph.D. candidate in the Department of Statistics at Columbia University, where she is jointly advised by David Blei and John Paisley. Her doctoral work is on deep generative models. More specifically, she designs algorithms for fitting deep generative models and combines hierarchical Bayes and deep learning to embed structure into deep generative models. Her work at Columbia is funded by a Columbia Dean Fellowship and a Google PhD Fellowship in Machine Learning. She was recently named a Rising Star in Machine Learning by the University of Maryland.
Prior to joining Columbia, she worked as a Junior Professional Associate at the World Bank. She did her undergraduate training in France, where she attended Lycée Henri IV and Télécom ParisTech, part of France's Grandes Écoles system. You can find her musings about all things Africa and Machine Learning on Twitter @adjiboussodieng.
Faculty Host: Sanmi Koyejo