For the Nantes Machine Learning Meetup
By Hugo Mougard
On July 3
Generative networks. Trained on unlabelled data. Useful to build priors (e.g. word2vec).
“What I cannot create, I do not understand.” (Richard Feynman)
Busino game camperate spent odea
In the bankaway of smarling the
SingersMay , who kill that imvic
Keray Pents of the same Reagun D
Manging include a tudancs shat "
His Zuith Dudget , the Denmbern
In during the Uitational questio
Divos from The ’ noth ronkies of
She like Monday , of macunsuer S
Solice Norkedin pring in since
ThiS record ( 31. ) UBS ) and Ch
It was not the annuas were plogr
This will be us , the ect of DAN
These leaded as most-worsd p2 a0
The time I paidOa South Cubry i
Dour Fraps higs it was these del
This year out howneed allowed lo
Kaulna Seto consficutes to repor
Generative Adversarial Networks use a second network, the discriminator, in place of a problem-specific loss.
Minimax: train the two networks with opposing objectives:
Discriminator: maximize the probability $D(x)$ when the input $x$ is real data.
Discriminator: minimize the probability $D(\tilde{x})$ (equivalently, maximize $1 - D(\tilde{x})$) when the input $\tilde{x}$ is generated.
Generator: maximize the probability $D(\tilde{x})$ when the input is generated.
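Written out, this is the standard minimax objective of Goodfellow et al. (2014), with $\tilde{x} = G(z)$:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_\text{data}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]$$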
Let's have a look at the DCGAN Torch code.
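The talk walks through that repository; as a stand-in here, a minimal PyTorch sketch of the same adversarial loop on toy 2-D data (the MLP architectures, optimizer settings and batch sizes are illustrative, not DCGAN's):

```python
import torch
import torch.nn as nn

# Toy generator and discriminator (small MLPs, not DCGAN's convnets).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + 2.0  # "real" samples: a shifted Gaussian
    fake = G(torch.randn(64, 8))           # generated samples from noise z

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: non-saturating loss, push D(fake) towards 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Note the `detach()` on the generated batch during the discriminator step: it keeps the discriminator update from backpropagating into the generator.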
GANs are extremely hard to train (unstable gradients, mode collapse). Wasserstein GANs (WGANs) largely fix that.
Change the role of the discriminator: instead of classifying inputs as real or fake, it becomes a critic whose scores estimate the Wasserstein distance between the real and generated distributions.
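A sketch of the critic update (assuming a toy `critic` and placeholder batches; the clipping threshold 0.01 and the RMSprop learning rate are the original paper's defaults):

```python
import torch
import torch.nn as nn

# WGAN critic: like the discriminator above but with no sigmoid;
# it outputs an unbounded score, not a probability.
critic = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

real = torch.randn(64, 2) * 0.5 + 2.0  # placeholder real batch
fake = torch.randn(64, 2)              # placeholder generated batch

# Critic step: maximize E[critic(real)] - E[critic(fake)],
# i.e. minimize the negation.
c_loss = critic(fake).mean() - critic(real).mean()
opt_c.zero_grad()
c_loss.backward()
opt_c.step()

# Original WGAN: clip weights after each update to (roughly)
# enforce the Lipschitz constraint the critic must satisfy.
for p in critic.parameters():
    p.data.clamp_(-0.01, 0.01)

# The generator step would then minimize -E[critic(fake)].
```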
Add an encoder $E$ from input $x$ to code $z$, to be able to query the generator on real inputs.
The discriminator now distinguishes between pairs: $(x, E(x))$ from real data and $(G(z), z)$ from the generator.
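In the ALI/BiGAN formulation, this gives a value function over joint pairs, with the encoder $E$ trained alongside $G$ against the discriminator:

$$\min_{G,E} \max_D \; \mathbb{E}_{x \sim p_\text{data}}\left[\log D(x, E(x))\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z), z)\right)\right]$$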
A minor tweak on WGANs (replacing weight clipping with a gradient penalty on the critic) that allowed very easy training of GANs in multiple domains (text, image, …).
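A minimal sketch of that gradient penalty (reusing a toy critic and placeholder batches; the penalty weight $\lambda = 10$ is the WGAN-GP paper's default):

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
real = torch.randn(64, 2) * 0.5 + 2.0  # placeholder real batch
fake = torch.randn(64, 2)              # placeholder generated batch

# Interpolate between real and generated samples.
eps = torch.rand(64, 1)
interp = (eps * real + (1 - eps) * fake).requires_grad_(True)

# Gradient of the critic's score with respect to the interpolates.
grad = torch.autograd.grad(critic(interp).sum(), interp,
                           create_graph=True)[0]

# Penalize deviations of the gradient norm from 1 (two-sided penalty).
gp = ((grad.norm(2, dim=1) - 1) ** 2).mean()

# Critic loss: Wasserstein term plus the penalty, no weight clipping.
c_loss = critic(fake).mean() - critic(real).mean() + 10.0 * gp
```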