Variational Inference
What is Variational Inference
= a method in Bayesian statistics and machine learning for approximating complex probability distributions:
- Instead of trying to directly calculate a complicated posterior distribution (often intractable), we pick a simpler family of distributions and try to find the one that is closest to the true posterior.
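"Closest" is usually measured with the Kullback-Leibler (KL) divergence; a sketch of the objective in standard notation (the symbols q, z, x are assumed here, not defined in the note itself):

```latex
q^{*}(z) \;=\; \arg\min_{q \in \mathcal{Q}} \; \mathrm{KL}\!\left( q(z) \,\|\, p(z \mid x) \right)
```

Here \(\mathcal{Q}\) is the chosen family of simple distributions and \(p(z \mid x)\) is the true posterior.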
Why use Variational Inference
- the Bayesian context: the marginal likelihood, which is needed for computing the posterior via Bayes' rule, is often intractable (cannot be computed exactly)
- Variational Inference turns inference into an optimization problem
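Concretely, the posterior needs the marginal likelihood as its normalizing constant, and that integral over all latent values is the intractable piece (standard Bayes' rule, notation assumed):

```latex
p(z \mid x) \;=\; \frac{p(x \mid z)\, p(z)}{p(x)},
\qquad
p(x) \;=\; \int p(x \mid z)\, p(z)\, dz
```

When z is high-dimensional or the model is non-conjugate, the integral for p(x) has no closed form, which is what motivates approximating the posterior instead of computing it.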
How it works
- Choose a family of approximating distributions q_φ(z) (e.g., a Gaussian with mean and variance parameters)
- Optimize the parameters φ to maximize the Evidence Lower Bound (ELBO):
  ELBO(φ) = E_{q_φ(z)}[log p(x, z)] − E_{q_φ(z)}[log q_φ(z)]
  (maximizing the ELBO is equivalent to minimizing the KL divergence to the true posterior, since the two differ only by the constant log p(x))
- Use gradient-based methods (e.g., stochastic gradient ascent with Monte Carlo estimates of the ELBO gradient) for optimization.
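The steps above can be sketched end-to-end on a toy model where the true posterior is known, so the approximation can be checked. This is a minimal illustration, not a library implementation: the model (z ~ N(0,1), x_i | z ~ N(z,1)), the Gaussian variational family q_φ(z) = N(μ, exp(ρ)²), and the learning-rate/step counts are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed for this demo): z ~ N(0, 1), x_i | z ~ N(z, 1).
# Its posterior is Gaussian in closed form, so we can verify the result.
x = rng.normal(2.0, 1.0, size=20)   # observed data
n = len(x)
post_var = 1.0 / (1.0 + n)          # exact posterior variance
post_mean = x.sum() * post_var      # exact posterior mean

# Variational family: q(z) = N(mu, exp(rho)^2); rho keeps sigma positive.
mu, rho = 0.0, 0.0
lr = 0.01

for step in range(4000):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1),
    # so gradients can flow through the Monte Carlo samples.
    eps = rng.normal(size=100)
    sigma = np.exp(rho)
    z = mu + sigma * eps

    # d/dz log p(x, z) = -z + sum_i (x_i - z) = -(1 + n) z + sum(x)
    g = -(1.0 + n) * z + x.sum()

    # ELBO = E_q[log p(x, z)] + entropy(q); for a Gaussian, d(entropy)/d(rho) = 1.
    grad_mu = g.mean()
    grad_rho = (g * eps).mean() * sigma + 1.0

    # Stochastic gradient ASCENT on the ELBO
    mu += lr * grad_mu
    rho += lr * grad_rho

print(f"q mean {mu:.3f}  vs exact posterior mean {post_mean:.3f}")
print(f"q std  {np.exp(rho):.3f}  vs exact posterior std  {post_var**0.5:.3f}")
```

After training, the variational mean and standard deviation should land close to the exact posterior values; with a richer model the exact posterior would be unavailable, but the optimization loop would look the same.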