Introduction to GANs
Refer to Understanding GANs.
A good visualization can be found here.
Outline
How do we generate random variables from a distribution?
How can GANs be expressed as a random variable generation problem?
What are matching-based generative networks?
How do we implement a GAN?
Generating Random Variables
Use a pseudorandom number generator
Generates a sequence of numbers that approximately follows a uniform distribution between 0 and 1
Review: A uniform distribution is one where all outcomes are equally probable; its density is $f(x) = \frac{1}{b - a}$ for $x \in [a, b]$. (Notice how $f(x)$ does not depend on the value of $x$, so $f(x)$ is the same for all values.)
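As a quick illustration (a minimal NumPy sketch, not part of the original notes; the seed and bin count are arbitrary), drawing many samples from a pseudorandom generator and binning them shows the roughly flat histogram expected of a uniform distribution:

```python
import numpy as np

# A pseudorandom number generator produces a deterministic sequence that
# approximates draws from Uniform(0, 1).
rng = np.random.default_rng(seed=42)  # seed chosen arbitrarily for illustration
samples = rng.uniform(0.0, 1.0, size=100_000)

# Every equal-width bin should hold roughly the same fraction of samples,
# reflecting the constant density f(x) = 1 / (b - a) on [0, 1].
counts, _ = np.histogram(samples, bins=10, range=(0.0, 1.0))
print(counts / counts.sum())  # each entry is close to 0.1
```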
How do we generate random variables?
Different techniques: inverse transform method, rejection sampling, Metropolis-Hastings algorithm
Inverse Transform Method
Let X be a random variable we want to sample from.
A random variable is defined by its CDF: $F_X(x) = P(X \le x)$.
Since we are dealing with uniform distributions, let $U$ be a uniform random variable on $[0, 1]$.
Its CDF is $F_U(u) = P(U \le u) = u$, where $u \in [0, 1]$.
Now, we suppose that $F_X$ is invertible. So we can define
$Y = F_X^{-1}(U)$.
So we have $F_Y(y) = P(Y \le y) = P(F_X^{-1}(U) \le y) = P(U \le F_X(y)) = F_X(y)$. Since Y and X have the same CDFs, they also define the same random variable.
Summary: This method generates a random variable that follows a given distribution. As shown above, we used the inverse CDF to construct Y so that it follows the same distribution, and hence generates the same random variables, as X. In our case, the source of randomness we started from was the uniform distribution on $[0, 1]$.
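For concreteness, here is a minimal sketch (not from the original notes; the exponential target distribution is chosen only because its CDF has a closed-form inverse) of the method in NumPy:

```python
import numpy as np

def sample_exponential(rate: float, size: int, rng: np.random.Generator) -> np.ndarray:
    """Inverse transform sampling for an Exponential(rate) distribution.

    CDF:      F_X(x) = 1 - exp(-rate * x)
    Inverse:  F_X^{-1}(u) = -ln(1 - u) / rate
    """
    u = rng.uniform(0.0, 1.0, size=size)   # U ~ Uniform(0, 1)
    return -np.log(1.0 - u) / rate          # Y = F_X^{-1}(U) has the same CDF as X

rng = np.random.default_rng(seed=0)
y = sample_exponential(rate=2.0, size=100_000, rng=rng)
print(y.mean())  # close to 1 / rate = 0.5
```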
Generating Random Variables using GANs
Let's explain this with an example. Suppose we want to generate $n \times n$ dog images. Each image can be flattened into a vector. Each of these vectors either fits the probability distribution of what a dog looks like or it does not. In other words, there exists a specific probability distribution for dog images (just as there is a probability distribution for bird or cat images).
The Defined Problem: Generating a dog image means generating a vector that fits the dog probability distribution. In other words, we need to generate a random variable with respect to the probability distribution of a dog (see the sketch after the list below).
Issues that will be addressed:
"dog probability distribution" is very complex over a large space
How do we explicitly express this dog probability distribution?
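To connect this framing to GANs: the generator is a parameterized function that maps a simple random variable (e.g. uniform or Gaussian noise) to a point in the image space, and training pushes the distribution of its outputs toward the target "dog distribution". The sketch below is purely illustrative and not from the original notes: the image size, latent dimension, and single-hidden-layer network are assumptions, and the network is untrained, so it only shows the shape of the mapping.

```python
import numpy as np

n = 64            # image side length (assumed for illustration)
latent_dim = 100  # dimension of the input noise vector (assumed)

rng = np.random.default_rng(seed=0)

# An untrained, single-hidden-layer "generator": a function G that maps a
# simple random variable z to a vector living in the n*n image space.
W1 = rng.normal(scale=0.02, size=(latent_dim, 256))
W2 = rng.normal(scale=0.02, size=(256, n * n))

def generator(z: np.ndarray) -> np.ndarray:
    h = np.tanh(z @ W1)      # hidden representation
    return np.tanh(h @ W2)   # output vector, reshapeable to an n x n image

z = rng.normal(size=(1, latent_dim))   # z drawn from a simple distribution
fake_image = generator(z).reshape(n, n)
print(fake_image.shape)  # (64, 64); training would shape the output distribution
```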