[course site]
Santiago Pascual de la Puente
santi.pascual@upc.edu
PhD Candidate
Universitat Politecnica de Catalunya
Technical University of Catalonia
Deep Generative Models II
#DLUPC
Outline
● Introduction
● Taxonomy
● PixelCNN & Wavenet
● Variational Auto-Encoders (VAEs)
● Generative Adversarial Networks (GANs)
● Conclusions
Recap from previous lecture...
What is a generative model?
Key Idea: our model cares about what distribution generated the input data points, and we want to mimic it with our probabilistic model. Our learned model should be able to make up new samples from the distribution, not just copy and paste existing samples!
Figure from NIPS 2016 Tutorial: Generative Adversarial Networks (I. Goodfellow)
Taxonomy
Model the probability density function:
● Explicitly
○ With tractable density → PixelRNN, PixelCNN and Wavenet
○ With approximate density → Variational Auto-Encoders
● Implicitly
○ Generative Adversarial Networks
PixelCNN: Factorizing the joint distribution
● Explicitly model the joint probability distribution of a data stream x as a product of element-wise conditional distributions, one per element xi in the stream.
○ Example: an image x of size (n, n) is decomposed by scanning pixels in raster order (row by row, and pixel by pixel within every row).
○ Apply the probability chain rule, where xi is the i-th pixel in the image:
p(x) = p(x1) · p(x2 | x1) · … = ∏i p(xi | x1, …, xi−1)
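As a rough illustration of what this factorization implies at sampling time, here is a minimal PyTorch sketch; `model` is a hypothetical causally-masked network (a stand-in for PixelCNN/Wavenet) whose output at position i depends only on the elements before i:

```python
import torch

@torch.no_grad()
def sample_autoregressive(model, length):
    # Sample a stream element by element via the chain rule:
    # p(x) = prod_i p(x_i | x_1, ..., x_{i-1})
    x = torch.zeros(1, length, dtype=torch.long)
    for i in range(length):
        logits = model(x)                      # (1, length, num_values);
        # the causal mask guarantees position i only sees elements before i
        probs = torch.softmax(logits[0, i], dim=-1)
        x[0, i] = torch.multinomial(probs, 1)  # draw x_i ~ p(x_i | x_<i)
    return x
```

This element-by-element loop is also why these models are the slowest at generation time (see Conclusions).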
Variational Auto-Encoder
[Figure: input → Encode → latent z → Decode → reconstruction]
We can compose our encoder-decoder setup and place our VAE losses to regularize and reconstruct.
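As a refresher from the previous lecture, a minimal sketch of those two loss terms and the reparameterization trick, assuming a Gaussian encoder and an MSE reconstruction (the exact likelihood term depends on the decoder):

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps sampling differentiable w.r.t. the encoder
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def vae_loss(x, x_recon, mu, logvar):
    # Negative ELBO: a reconstruction term plus a KL term that regularizes
    # q(z|x) = N(mu, diag(exp(logvar))) towards the prior p(z) = N(0, I)
    recon = F.mse_loss(x_recon, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```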
Generative Adversarial Networks
The GAN Epidemic
Figure credit: https://github.com/hindupuravinash/the-gan-zoo
Generative Adversarial Networks (GANs)
We have two modules: a Generator (G) and a Discriminator (D).
● They “fight” against each other during training → Adversarial Training
● G's mission: fool D into misclassifying its samples as real.
● D's mission: discriminate between G's samples and real samples.
The generator
A deterministic mapping from a latent random vector z to a sample from Pmodel (or Pg), which should be similar to Pdata.
E.g. DCGAN:
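A sketch of a DCGAN-style generator in that spirit (Radford et al. 2015); the channel sizes are illustrative, not the exact paper configuration:

```python
import torch.nn as nn

class DCGANGenerator(nn.Module):
    # Project z with a transposed conv, then upsample with strided
    # transposed convolutions to a 64x64 RGB image in a single shot.
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 512, 4, 1, 0, bias=False),  # 4x4
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),    # 8x8
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),    # 16x16
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),     # 32x32
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),       # 64x64 RGB
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))
```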
The discriminator
A parameterised function that tries to distinguish between real samples from Pdata and generated ones from Pmodel.
[Figure: a stack of conv layers followed by final fully connected layers producing the real/fake decision]
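And the matching DCGAN-style discriminator sketch: strided convolutions (no max-pooling), LeakyReLU activations and a sigmoid real/fake score; again, sizes are illustrative:

```python
import torch.nn as nn

class DCGANDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),    # 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),  # 16x16
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True), # 8x8
            nn.Conv2d(256, 512, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True), # 4x4
            nn.Conv2d(512, 1, 4, 1, 0), nn.Sigmoid(),                      # score
        )

    def forward(self, x):
        # Returns one real/fake probability per input image
        return self.net(x).view(-1)
```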
Adversarial Training (conceptual)
[Figure: a latent random variable z is mapped by the Generator to a fake sample; a real sample is drawn from a database of real-world samples; the Discriminator receives both, and its Real/Fake decisions produce the loss]
D determines that database images are Real, whereas generated ones are Fake.
Adversarial Training
We have networks G and D, and a training set with pdf Pdata. Notation:
● θ(G), θ(D): parameters of models G and D respectively
● x ~ Pdata: M-dim sample from the training-data pdf
● z ~ N(0, I): sample from the prior pdf (e.g. N-dim normal)
● G(z) = ẍ ~ Pmodel: M-dim sample from the G network
The D network receives x or ẍ as input → it decides whether the input is real or fake. It is optimized as a binary classifier to learn: x is real (1), ẍ is fake (0).
The G network maps a sample z to G(z) = ẍ → it is optimized to maximize D's mistakes.
NIPS 2016 Tutorial: Generative Adversarial Networks. Ian Goodfellow
Adversarial Training (batch update) (1)
● Pick a sample x from the training set
● Show x to D and update D's weights to output 1 (real)
Adversarial Training (batch update) (2)
● G maps a sample z to ẍ
● Show ẍ to D and update D's weights to output 0 (fake)
Adversarial Training (batch update) (3)
● Freeze D's weights
● Update G's weights to make D output 1 (just G's weights!)
● Unfreeze D's weights and repeat (see the training-loop sketch below)
Discriminator training: update θ(D) to maximize
E x~Pdata [log D(x)] + E z~N(0,I) [log(1 − D(G(z)))]
Generator training: update θ(G) to make D output 1 on generated samples, i.e. minimize
E z~N(0,I) [log(1 − D(G(z)))]
with k discriminator updates per generator update.
Q: BUT WHY K DISCRIMINATOR UPDATES?
A: WE WANT STRONG DISCRIMINATIVE FEATURES TO BACKPROP TO THE GENERATOR.
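Putting the three batch-update steps together, a minimal PyTorch training-loop sketch; the tiny MLPs and the real-data sampler are hypothetical stand-ins, not the networks from any slide:

```python
import torch
import torch.nn as nn

z_dim, x_dim, batch, k = 16, 64, 32, 1
G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))
D = nn.Sequential(nn.Linear(x_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()
real_data = lambda: torch.randn(batch, x_dim) + 3.0  # toy stand-in for Pdata

for step in range(1000):
    for _ in range(k):                  # steps (1)+(2): k discriminator updates
        x = real_data()
        z = torch.randn(batch, z_dim)
        x_fake = G(z).detach()          # don't backprop into G here
        loss_d = bce(D(x), torch.ones(batch, 1)) \
               + bce(D(x_fake), torch.zeros(batch, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # step (3): generator update. D's weights stay fixed because only G's
    # parameters belong to opt_g; gradients still flow *through* D to G.
    z = torch.randn(batch, z_dim)
    loss_g = bce(D(G(z)), torch.ones(batch, 1))  # make D output 1 on fakes
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```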
Training GANs dynamics
Iterate these two steps until convergence (which may not happen):
● Updating the discriminator should make it better at discriminating between real images and generated ones (discriminator improves).
● Updating the generator makes it better at fooling the current discriminator (generator improves).
Eventually (we hope) the generator gets so good that it is impossible for the discriminator to tell the difference between real and generated images → discriminator accuracy = 0.5.
Adversarial Training Analogy: is it fake money?
Imagine we have a counterfeiter (G) trying to make fake money, and the police (D)
has to detect whether money is real or fake.
Key Idea: D is trained to detect fraud (its parameters learn discriminative features of “what is real/fake”). As backprop flows through D to G, information leaks about the requirements for bank notes to look real. This lets G perform small corrections that get closer and closer to what real samples would be.
Caveat: this means GANs are not suitable for predicting discrete tokens (e.g. words) → in a discrete space there is no “small change” criterion for moving to a neighbour (but they can work in a word-embedding space, for example).
The police inspect every note and give a verdict, leaking feedback that the counterfeiter uses to refine the notes:
[Figure: counterfeit 100 bill] FAKE: It’s not even green
[Figure: counterfeit 100 bill] FAKE: There is no watermark
[Figure: counterfeit 100 bill] FAKE: Watermark should be rounded
After enough iterations, and if the counterfeiter is good enough (in terms of the G network it means “has enough parameters”), the police should be confused:
[Figure: counterfeit and real 100 bills] REAL? FAKE?
Conditional GANs
GANs can be conditioned on extra information beyond z: text, labels, speech, etc.
z might capture the random characteristics of the data (the variability of plausible futures), whilst c conditions the deterministic parts (see the sketch below)!
For details on ways to condition GANs: Ways of Conditioning Generative Adversarial Networks (Wack et al.)
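A minimal sketch of the simplest conditioning scheme, concatenating z with the conditioning vector c at the generator input; names and sizes are illustrative, and Wack et al. discuss several alternatives:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, z_dim=100, c_dim=10, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + c_dim, 256), nn.ReLU(),
            nn.Linear(256, x_dim), nn.Tanh(),
        )

    def forward(self, z, c):
        # z carries the random variability; c (a label embedding, text
        # encoding, speech features, ...) pins down the conditioned content
        return self.net(torch.cat([z, c], dim=1))
```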
Where is the downside...?
GANs are tricky and hard to train! We do not want to minimize a single cost function; instead we want both networks to reach a Nash equilibrium (saddle point).
● Formulated as a “game” between two networks
● Unstable dynamics: hard to keep generator and discriminator in balance
● Optimization can oscillate between solutions
● Generator can collapse
Because of extensive experience within the GAN community (with some does-not-work frustration from time to time), you can find tricks and tips on how to train a vanilla GAN here: https://github.com/soumith/ganhacks
Solving the collapse: Minibatch Discrimination (Salimans et al. 2016)
Generator collapse = a parameter setting where G always emits the same sample.
● When collapse is imminent, the gradient of D may point in similar directions for many similar points → there is no coordination between D's gradients, because D processes each sample in the minibatch independently. How can G's outputs be told to be dissimilar if D makes B independent decisions?
● Looking at multiple examples combined can help avoid generator collapse → model the closeness between examples in a minibatch:
○ Pick D's features at an intermediate layer.
○ Multiply them by a learnable tensor T to obtain a matrix M per sample.
○ Introduce a notion of distance between the rows of the interaction matrices M across the minibatch.
○ Concatenate the resulting closeness features with the original intermediate features.
● The discriminator still classifies REAL/FAKE, but it now has side information to do so that tries to introduce a notion of diversity in the batch (a sketch follows).
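A PyTorch sketch of that layer, following Salimans et al. 2016; the feature sizes are illustrative:

```python
import torch
import torch.nn as nn

class MinibatchDiscrimination(nn.Module):
    # Map intermediate D features f(x_i) in R^A through a learnable tensor
    # T in R^{A x B x C} to matrices M_i in R^{B x C}, compute L1 distances
    # between corresponding rows across the batch, and concatenate the
    # resulting similarity features with the original ones.
    def __init__(self, in_features, out_features, kernel_dim):
        super().__init__()
        self.T = nn.Parameter(
            torch.randn(in_features, out_features, kernel_dim) * 0.1)

    def forward(self, f):
        N = f.size(0)                                    # batch size B
        M = f.matmul(self.T.flatten(1)).view(N, *self.T.shape[1:])  # (N,B,C)
        diff = M.unsqueeze(0) - M.unsqueeze(1)           # (N, N, B, C)
        c = torch.exp(-diff.abs().sum(dim=3))            # (N, N, B)
        o = c.sum(dim=1) - 1.0                           # drop self-similarity
        return torch.cat([f, o], dim=1)                  # (N, A + B)
```

For a collapsed batch the similarity features o are large for every sample, giving D an easy cue to call the whole batch fake, which pushes G towards diverse outputs.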
Virtual Batch Normalization
Batch Normalization (Ioffe & Szegedy 2015) is a technique to re-standardize every layer's output distribution to N(0, I): each of the K feature dimensions (the hidden size) is normalized with statistics computed over the B examples in the minibatch.
Batch Normalization provokes intra-batch correlation, making G prone to mode collapse → we can use a fixed reference batch to smooth the statistics of our minibatch, normalizing over the 2B examples of the reference batch plus the current one (Salimans et al. 2016).
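A simplified PyTorch sketch, assuming (as on the slide) that the statistics are taken over the reference batch joined with the current minibatch:

```python
import torch
import torch.nn as nn

class VirtualBatchNorm1d(nn.Module):
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))
        self.eps = eps
        self.ref_batch = None  # set once with a fixed reference batch

    def set_reference(self, ref):
        self.ref_batch = ref.detach()

    def forward(self, x):
        # Join reference batch (B rows) with current batch (B rows) -> 2B rows,
        # so normalization is less dominated by the current minibatch alone
        joint = torch.cat([self.ref_batch, x], dim=0)
        mean = joint.mean(dim=0, keepdim=True)
        var = joint.var(dim=0, unbiased=False, keepdim=True)
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta
```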
Least Squares GAN
Main idea: shift to a loss function that provides smooth & non-saturating gradients in D.
● Because of sigmoid saturation in the binary classification loss, G gets no information once D confidently labels true examples → vanishing gradients stop G from learning.
● The least-squares loss improves learning with a notion of distance from Pmodel to Pdata (see the sketch below):
Least Squares Generative Adversarial Networks, Mao et al. 2016
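A sketch of the least-squares objectives with the 0/1 target coding of Mao et al. 2016, implemented with an MSE loss in PyTorch:

```python
import torch
import torch.nn.functional as F

def lsgan_d_loss(d_real, d_fake):
    # D regresses real samples to 1 and fake samples to 0
    return 0.5 * (F.mse_loss(d_real, torch.ones_like(d_real))
                  + F.mse_loss(d_fake, torch.zeros_like(d_fake)))

def lsgan_g_loss(d_fake):
    # G pulls fake samples toward the real label; the quadratic penalty keeps
    # gradients alive even when D is confident, unlike a saturated sigmoid
    return 0.5 * F.mse_loss(d_fake, torch.ones_like(d_fake))
```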
Other GAN implementations
Other GAN variants try to stabilize the learning dynamics by imposing new divergences for D to measure → gradients flow better and the loss becomes correlated with generation quality:
● Wasserstein GAN (Arjovsky et al. 2017)
● BEGAN: Boundary Equilibrium Generative Adversarial Networks (Berthelot et al. 2017)
● Improved Training of Wasserstein GANs (Gulrajani et al. 2017)
GAN Applications
So far GANs have been extensively used in computer vision tasks:
● Generating images/generating video frames.
● Unsupervised feature extraction. Representation learning.
● Manipulating images (like an advanced Photoshop).
● Image coding/Super Resolution.
● Transferring image styles.
But now they’re extending to other fields, like speech!
● Speech Enhancement (Waveform)
● Unpaired Voice Conversion (Spectrum)
● Speech synthesis post-filtering (Spectrum)
Generating images/frames
(Radford et al. 2015)
Deep Conv. GAN (DCGAN) effectively generated 64x64 RGB images in a single
shot.
[Figure: DCGAN architecture — a fully connected projection of z followed by strided transposed conv layers in G; conv layers with no max-pooling and a fully connected output in D]
For example, bedrooms from the LSUN dataset (Radford et al. 2015):
Generating images/frames conditioned on captions
Generative Adversarial Text to Image Synthesis (Reed et al. 2016b)
Generating images/frames conditioned on captions
StackGAN (Zhang et al. 2016) increased resolution to 256x256 with a conceptual two-stage process: ‘Draft’ and ‘Fine-grained details’.
[Figure: caption-conditioned samples from Reed et al. 2016b and Zhang et al. 2016]
Unsupervised feature extraction/learning representations
Similarly to word2vec, GANs learn a distributed representation that disentangles
concepts such that we can perform semantic operations on the data manifold:
v(Man with glasses) - v(man) + v(woman) = v(woman with glasses)
(Radford et al. 2015)
Image super-resolution
Bicubic interpolation does not use data statistics. SRResNet is trained with MSE. SRGAN is able to understand that there are multiple correct answers, rather than averaging them.
(Ledig et al. 2016)
Image super-resolution
Averaging is a serious problem we face when dealing with complex distributions.
(Ledig et al. 2016)
Manipulating images and assisted content creation
https://youtu.be/9c4z6YsBGQ0?t=126
(Zhu et al. 2016)
Speech Enhancement
Speech Enhancement GAN (Pascual et al. 2017)
Speech Enhancement
Samples available: http://veu.talp.cat/segan/
Conclusions
● Generative models are built to learn the underlying structure hidden in our high-dimensional data.
● There are currently three main deep generative models: PixelCNN, VAE and GAN.
● PixelCNN factorizes an explicit, tractable discrete distribution with the probability chain rule.
○ These are the slowest generative models at sampling time because of their recursive nature.
● VAEs and GANs map our highly complex PDFs onto a simpler prior of our choice over z.
● Once a VAE is trained with the variational lower bound, we can generate new samples from the learned z manifold.
● GANs use an implicit learning method to mimic our data distribution Pdata.
○ GANs are the sharpest generators at the moment, and a very active research field.
○ They are the hardest ones to train though, because of their equilibrium dynamics.
Thanks! Questions?
References
● NIPS 2016 Tutorial: Generative Adversarial Networks (Goodfellow 2016)
● Pixel Recurrent Neural Networks (van den Oord et al. 2016)
● Conditional Image Generation with PixelCNN Decoders (van den Oord et al. 2016)
● Auto-Encoding Variational Bayes (Kingma & Welling 2013)
● https://wiseodd.github.io/techblog/2016/12/10/variational-autoencoder/
● https://jaan.io/what-is-variational-autoencoder-vae-tutorial/
● Tutorial on Variational Autoencoders (Doersch 2016)
● Improved Techniques for Training GANs (Salimans et al. 2016)
● Generative Adversarial Networks: An Overview (Creswell et al. 2017)
● Generative Adversarial Networks (Goodfellow et al. 2014)
