
[Generative Model] Latent Variable model

This post was written to deepen my own understanding of generative models. The reference material is Reference 1.

Latent variable model (LVM)

  • defines a distribution over an observation $x$ by using a (vector-valued) latent variable $z$, which serves as an explanation for the observation, and specifying (a minimal numerical sketch follows this list):
    • The prior distribution $p(z)$ over the latent variable
    • The likelihood $p(x|z)$ that connects the latent variable to the observation
    • The joint distribution $p(x, z) = p(x|z)p(z)$
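To make these three ingredients concrete, here is a minimal Python sketch assuming a hypothetical 1-D linear-Gaussian model; this specific choice of prior and likelihood is an illustrative assumption, not taken from the reference.

```python
import numpy as np

# Hypothetical toy LVM (illustrative assumption):
#   prior:      z ~ N(0, 1)
#   likelihood: x | z ~ N(2z + 1, 0.5^2)

def gaussian_pdf(v, mean, std):
    return np.exp(-0.5 * ((v - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

def prior(z):             # p(z)
    return gaussian_pdf(z, 0.0, 1.0)

def likelihood(x, z):     # p(x|z)
    return gaussian_pdf(x, 2.0 * z + 1.0, 0.5)

def joint(x, z):          # p(x, z) = p(x|z) p(z)
    return likelihood(x, z) * prior(z)
```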

We are interested in computing the marginal likelihood $p(x)$ and the posterior distribution $p(z|x)$. Generating an observation from the model amounts to first drawing a latent variable that explains it and then drawing the observation itself:

$z \sim p(z), \quad x \sim p(x|z)$
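A sketch of this two-step (ancestral) sampling, again under the hypothetical linear-Gaussian model above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ancestral sampling: first z ~ p(z), then x ~ p(x|z).
z = rng.normal(0.0, 1.0, size=10_000)    # z ~ N(0, 1)
x = rng.normal(2.0 * z + 1.0, 0.5)       # x | z ~ N(2z + 1, 0.5^2)
```

Because the model is linear-Gaussian, the resulting x are exact samples from the marginal $p(x)$, here $N(1,\, 2^2 + 0.5^2)$; for a deep generative model the same two-step recipe applies, with a neural network inside $p(x|z)$.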

Going in the other direction, from observations to latent values via $p(z|x)$, is called inference:

$p(z|x) = \frac{p(x,z)}{p(x)} = \frac{p(x,z)}{\int p(x,z) dz}$
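In one dimension this normalizing integral can simply be brute-forced on a grid, which makes the formula concrete (again using the hypothetical model from the sketches above):

```python
import numpy as np

def gaussian_pdf(v, mean, std):
    return np.exp(-0.5 * ((v - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

x_obs = 2.0                                  # an observed data point
z_grid = np.linspace(-6.0, 6.0, 2001)        # grid over the latent space
dz = z_grid[1] - z_grid[0]

# p(x, z) = p(x|z) p(z) evaluated on the grid
joint_vals = gaussian_pdf(x_obs, 2.0 * z_grid + 1.0, 0.5) * gaussian_pdf(z_grid, 0.0, 1.0)

marginal = joint_vals.sum() * dz             # p(x) = ∫ p(x, z) dz (Riemann sum)
posterior = joint_vals / marginal            # p(z|x) on the grid
```

This brute-force normalization works only because $z$ is one-dimensional; for high-dimensional latent variables the integral becomes intractable, which is exactly the problem discussed next.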

So we have to compute the marginal likelihood $p(x) = \int p(x, z)\,dz$. Since we cannot evaluate this integral analytically in general, we take a detour:

$p(x|z)p(z) = p(x,z) = p(z|x)p(x)$

Computing $p(z|x)$ thus amounts to inverting $p(x|z)$ probabilistically. In general this is intractable, and there are two main strategies to avoid intractable inference:

  1. Invertible models/Normalizing flows
    • Designing models for which inference is tractable
    • Key idea: approximate the data distribution by transforming the prior distribution through an invertible function $f$, so that the change of variables $x = f(z)$ gives $p(x) = p(z)\left|\det \frac{\partial f^{-1}(x)}{\partial x}\right|$ with $z = f^{-1}(x)$
    • Simpler training but less expressive models
  2. Using approximate inference
    1. Markov Chain Monte Carlo (MCMC): generate samples from the exact posterior using a Markov chain (see the sketch after this list)
    2. Variational inference: approximate the posterior $p_\theta(z|x)$ with a tractable distribution $q_\phi(z|x)$
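As a toy illustration of approach 2.1, here is a random-walk Metropolis-Hastings sketch for the hypothetical 1-D model above. The key point is that MCMC needs only the unnormalized joint $p(x, z)$, so the intractable marginal $p(x)$ cancels in the acceptance ratio:

```python
import numpy as np

def gaussian_logpdf(v, mean, std):
    return -0.5 * ((v - mean) / std) ** 2 - np.log(std * np.sqrt(2.0 * np.pi))

def log_joint(x, z):      # log p(x, z) for the hypothetical linear-Gaussian model
    return gaussian_logpdf(x, 2.0 * z + 1.0, 0.5) + gaussian_logpdf(z, 0.0, 1.0)

rng = np.random.default_rng(0)
x_obs, z = 2.0, 0.0
samples = []
for _ in range(20_000):
    z_prop = z + rng.normal(0.0, 0.5)         # symmetric random-walk proposal
    # Accept with probability min(1, p(x, z') / p(x, z)); p(x) cancels here.
    if np.log(rng.random()) < log_joint(x_obs, z_prop) - log_joint(x_obs, z):
        z = z_prop
    samples.append(z)
# After discarding burn-in, `samples` approximate draws from the exact posterior p(z|x).
```

Variational inference instead replaces this sampling with optimization, fitting the parameters $\phi$ of $q_\phi(z|x)$ so that it is close to the true posterior.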