Variational inference methods are widely used in the analysis of large datasets. The key idea of such methods for Bayesian inference is to recast the problem of approximating a posterior distribution as an optimization problem. Much recent progress in the area concerns the application of stochastic gradient methods to the optimization problems that arise.
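As a minimal sketch of this idea, consider a toy conjugate model (an assumption for illustration, not part of the project): observations y_i ~ N(theta, 1) with prior theta ~ N(0, 1). A Gaussian variational approximation q(theta) = N(mu, s^2) is fitted by stochastic gradient ascent on the ELBO using single-sample reparameterization gradients, and the result can be checked against the exact posterior, which is available in closed form here.

```python
import numpy as np

# Toy model (illustrative assumption): y_i ~ N(theta, 1), prior theta ~ N(0, 1).
# The exact posterior is Gaussian, so the variational answer can be checked.
rng = np.random.default_rng(0)
n = 50
y = rng.normal(1.0, 1.0, size=n)

post_var = 1.0 / (n + 1)           # exact posterior variance
post_mean = y.sum() * post_var     # exact posterior mean

# Variational family q(theta) = N(mu, exp(log_s)^2); maximize the ELBO by
# stochastic gradient ascent with the reparameterization trick.
mu, log_s = 0.0, 0.0
lr, steps = 0.01, 8000
trace = []
for t in range(steps):
    s = np.exp(log_s)
    eps = rng.normal()
    theta = mu + s * eps           # reparameterized draw from q
    # gradient of log p(y, theta) with respect to theta
    g = (y - theta).sum() - theta
    # single-sample ELBO gradients (the entropy of q contributes +1 to log_s)
    mu += lr * g
    log_s += lr * (g * s * eps + 1.0)
    trace.append((mu, log_s))

# Averaging the second half of the iterates stabilizes the stochastic updates
mu_hat = np.mean([m for m, _ in trace[steps // 2:]])
s_hat = np.exp(np.mean([ls for _, ls in trace[steps // 2:]]))
print(mu_hat, post_mean)           # variational mean vs exact posterior mean
print(s_hat, np.sqrt(post_var))    # variational sd vs exact posterior sd
```

Note that the fitted (mu, log_s) are specific to this one dataset; rerunning the whole optimization for each new dataset is exactly the cost that amortized methods aim to avoid.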
Amortized variational inference methods use data-adaptive parametrizations of variational posterior distributions to obtain parsimonious but accurate representations. The reduced parametrizations that arise ease some of the difficulties involved in the numerical optimization.
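Continuing the same toy normal-mean model (again an illustrative assumption), amortization can be sketched by replacing the per-dataset variational parameters with a shared "inference network" — here just a linear map mu(y) = w * mean(y) + b with a common log-sd c — trained across datasets simulated from the model. Once trained, the map gives a variational posterior for any new dataset without further optimization.

```python
import numpy as np

# Amortized sketch (illustrative assumptions): y_i ~ N(theta, 1), prior
# theta ~ N(0, 1), fixed n. Instead of optimizing (mu, log_s) per dataset,
# a tiny linear inference network mu(y) = w * mean(y) + b with shared
# log-sd c maps any dataset directly to its variational posterior.
rng = np.random.default_rng(1)
n = 50
w, b, c = 0.0, 0.0, 0.0
lr, steps = 0.005, 6000
trace = []
for t in range(steps):
    theta0 = rng.normal()                 # draw a fresh dataset from the model
    y = rng.normal(theta0, 1.0, size=n)
    ybar = y.mean()
    m, s = w * ybar + b, np.exp(c)
    eps = rng.normal()
    theta = m + s * eps                   # reparameterized draw from q(theta | y)
    g = (y - theta).sum() - theta         # d log p(y, theta) / d theta
    w += lr * g * ybar
    b += lr * g
    c += lr * (g * s * eps + 1.0)
    trace.append((w, b, c))

w_hat, b_hat, c_hat = np.mean(trace[steps // 2:], axis=0)
# For this conjugate model the exact amortized solution is linear:
# mu(y) = n/(n+1) * mean(y), sd = 1/sqrt(n+1), so the fit can be verified.
print(w_hat, n / (n + 1))
print(b_hat, 0.0)
print(np.exp(c_hat), (n + 1) ** -0.5)
```

The training cost is paid once, up front; afterwards each new dataset requires only a cheap forward pass through the fitted map, which is what makes amortized inference attractive when a posterior must be computed many times.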
The purpose of this project is to exploit amortized variational inference methods in the context of Bayesian model criticism. Here a need often arises to compute a posterior distribution many times for many different datasets; amortized inference makes these difficult computations feasible.
The methodological ideas will be developed in the context of outlier detection for clustered data using random effects models, and the fitting of time-varying parameter versions of computationally expensive computer models, where inference about the time-varying parameters sheds light on model deficiencies.
David Nott, Jo Seongil