Video: The Success of Deep Generative Models


In this video from PASC18, Jakub Tomczak from the University of Amsterdam presents: The Success of Deep Generative Models.

“Deep generative models allow us to learn hidden representations of data and generate new examples. There are two major families of models exploited in current applications: Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs). The principle of GANs is to train a generator that can produce examples from random noise, adversarially against a discriminative model that is forced to distinguish true samples from generated ones. Images generated by GANs are very sharp and detailed. The biggest disadvantage of GANs is that they are trained by solving a minimax optimization problem, which causes significant learning instability. VAEs are based on a fully probabilistic perspective of variational inference. The learning problem aims at maximizing the variational lower bound for a given family of variational posteriors. The model can be trained by backpropagation, but the resulting generated images tend to be rather blurry. However, VAEs are probabilistic models and thus can be incorporated into almost any probabilistic framework. We will discuss the basics of both approaches and present recent extensions. We will point out advantages and disadvantages of GANs and VAEs. Some of the most promising applications of deep generative models will be shown.”
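To make the GAN minimax setup concrete, here is a minimal training-loop sketch in PyTorch. The toy network sizes, optimizer settings, and Gaussian stand-in data are illustrative assumptions, not details from the talk; the point is the alternating discriminator/generator updates that the abstract describes.

```python
import torch
import torch.nn as nn

# Illustrative toy networks; dimensions are arbitrary assumptions.
latent_dim, data_dim = 16, 2
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):
    batch = real.size(0)
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    fake = G(torch.randn(batch, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: try to fool D into labeling fakes as real.
    loss_g = bce(D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy usage: a shifted Gaussian stands in for a real dataset.
for step in range(1000):
    real = torch.randn(128, data_dim) * 0.5 + 1.0
    train_step(real)
```

The instability the abstract mentions shows up here directly: the two optimizers pull the shared objective in opposite directions, so neither loss is guaranteed to decrease monotonically.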
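Similarly, the VAE objective (the variational lower bound, or ELBO) can be sketched in a few lines. This is a generic textbook formulation with assumed sizes and a Bernoulli likelihood, not the specific models from the talk; it shows the reconstruction-plus-KL decomposition and the reparameterization trick that makes backpropagation possible.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, data_dim = 8, 784  # e.g. flattened 28x28 images; sizes are assumptions

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(data_dim, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, data_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients w.r.t. mu, logvar.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def negative_elbo(x, recon_logits, mu, logvar):
    # Reconstruction term under a Bernoulli likelihood...
    recon = F.binary_cross_entropy_with_logits(recon_logits, x, reduction='sum')
    # ...plus KL(q(z|x) || N(0, I)), available in closed form for Gaussians.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl  # minimizing this maximizes the variational lower bound

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, data_dim)  # stand-in batch; real data would be binarized images
recon, mu, logvar = model(x)
loss = negative_elbo(x, recon, mu, logvar)
opt.zero_grad(); loss.backward(); opt.step()
```

Because the whole objective is a single differentiable loss, training is ordinary gradient descent, which is why VAEs avoid the minimax instability of GANs at the cost of blurrier samples.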

See more talks in the PASC18 Video Gallery

PASC19 takes place June 12-14, 2019 in Zurich, Switzerland.

Check out our insideHPC Events Calendar