Understanding the Evidence Lower Bound (ELBO) - Cross Validated: The illustration seems to indicate that the ELBO can be positive, so I am a little confused. Now the main question: I understand the ELBO is a lower bound on the log-likelihood, so we want to make it as large as possible.
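For reference, this is the quantity the thread is discussing, written in the usual notation (a sketch assuming a latent-variable model $p(x, z)$ and a variational distribution $q(z)$, neither of which is spelled out in the snippet above):

$$\mathrm{ELBO}(q) = \mathbb{E}_{q(z)}\big[\log p(x, z) - \log q(z)\big] \le \log p(x).$$

For continuous models the densities can exceed 1, so both $\log p(x)$ and the ELBO can be positive; the bound constrains the ELBO relative to the log-likelihood, not relative to zero.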
maximum likelihood - ELBO - Jensen Inequality - Cross Validated: The ELBO is a quantity used to approximate the log marginal likelihood of observed data, obtained by applying Jensen's inequality to the log-likelihood. This leads to the fact that maximizing the ELBO with respect to the parameters of $q$ is equivalent to minimizing the KL divergence from $q$ to the true posterior. Without this approximation, sampling before taking the log can readily introduce high variance into the expectation.
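The Jensen step referenced here is, in the same notation as above:

$$\log p(x) = \log \mathbb{E}_{q(z)}\!\left[\frac{p(x, z)}{q(z)}\right] \ge \mathbb{E}_{q(z)}\!\left[\log \frac{p(x, z)}{q(z)}\right] = \mathrm{ELBO}(q).$$

Moving the log inside the expectation is what keeps the Monte Carlo estimator well behaved: averaging logs avoids the potentially heavy-tailed ratios $p(x, z)/q(z)$, whose log-of-an-average version is what introduces the high variance mentioned above.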
Why does Variational Inference work? - Cross Validated: The ELBO is a lower bound, and only matches the true log-likelihood when the q-distribution (encoder) we choose equals the true posterior distribution. Are there any guarantees that maximizing the ELBO indeed…
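The gap behind this question is exactly a KL divergence; the standard identity (same notation as above) is

$$\log p(x) = \mathrm{ELBO}(q) + \mathrm{KL}\big(q(z) \,\|\, p(z \mid x)\big),$$

so the bound is tight, $\mathrm{ELBO}(q) = \log p(x)$, if and only if $q(z) = p(z \mid x)$.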
Why is computing $\log p(x)$ difficult, but not the ELBO? What you're describing is also a legitimate strategy: it's importance sampling from the prior, using the likelihood as the weights. There may be numerical reasons why one works and the other doesn't in some cases, but I don't really know (hence a comment, not an answer). Can you write a little more about where you encountered this strategy for approximating the ELBO?
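To make the two estimators concrete, here is a minimal sketch in Python (the conjugate Gaussian toy model, the variational parameters mu_q and sigma_q, and all variable names are assumptions made for this illustration, not anything from the threads above). Because the model is conjugate, $\log p(x)$ has a closed form to compare against:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy conjugate model (an assumption for illustration):
#   z ~ N(0, 1),  x | z ~ N(z, 1)  =>  marginal p(x) = N(0, 2).
x = 1.5
true_log_px = norm.logpdf(x, loc=0.0, scale=np.sqrt(2.0))

N = 10_000

# Importance sampling from the prior, likelihood as the weights:
#   log p(x) = log E_{z ~ prior}[ p(x | z) ]  (sample first, log after).
z_prior = rng.standard_normal(N)
log_w = norm.logpdf(x, loc=z_prior, scale=1.0)        # log-likelihood weights
is_estimate = np.logaddexp.reduce(log_w) - np.log(N)  # stable log-mean-exp

# Monte Carlo ELBO with a deliberately imperfect q(z) = N(0.5, 1):
#   ELBO = E_q[ log p(x, z) - log q(z) ]  (log inside the expectation).
mu_q, sigma_q = 0.5, 1.0
z_q = mu_q + sigma_q * rng.standard_normal(N)
log_joint = norm.logpdf(z_q, 0.0, 1.0) + norm.logpdf(x, z_q, 1.0)
elbo = np.mean(log_joint - norm.logpdf(z_q, mu_q, sigma_q))

print(f"true log p(x)            : {true_log_px:.4f}")
print(f"importance-sampling est. : {is_estimate:.4f}")  # converges to log p(x)
print(f"ELBO estimate            : {elbo:.4f}")         # strictly below log p(x)
```

Since $q$ here is not the true posterior $N(0.75, \sqrt{0.5})$, the ELBO estimate sits visibly below $\log p(x)$, while the importance-sampling estimate lands close to it; with a less benign model the importance weights become heavy-tailed and that estimator's variance blows up, which is one plausible version of the numerical issue the comment alludes to.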