**Use of Sampling Methods in Bayesian Inference**

by Christel Faes (10:45 and 12:00)

Sampling is the most commonly used approach for Bayesian estimation of the parameters in a model, and sampling from the posterior distribution is a very powerful tool. This talk provides a brief tour of different sampling techniques. We start with the concept of Monte Carlo sampling, using independent samples from univariate and multivariate posterior distributions. We then give an introduction to Markov chain Monte Carlo (MCMC) methods, a form of dependent sampling.
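As a minimal sketch of the two flavours of sampling mentioned above (my own illustration, not material from the talk): for a Beta(3, 2) posterior, a hypothetical coin-bias posterior chosen here only as an example, we can estimate the posterior mean either by independent Monte Carlo draws or by a dependent random-walk Metropolis chain.

```python
import math
import random

def monte_carlo_mean(n_samples=100_000, seed=0):
    """Independent (Monte Carlo) sampling: draw directly from the posterior."""
    rng = random.Random(seed)
    draws = [rng.betavariate(3, 2) for _ in range(n_samples)]
    return sum(draws) / n_samples  # estimates E[theta] = 3/5 for Beta(3, 2)

def metropolis_mean(n_samples=100_000, seed=0):
    """Dependent (MCMC) sampling: random-walk Metropolis targeting the same posterior."""
    rng = random.Random(seed)

    def log_post(t):
        # Unnormalized log density of Beta(3, 2); MCMC only needs it up to a constant.
        return 2 * math.log(t) + math.log(1 - t)

    theta, total = 0.5, 0.0
    for _ in range(n_samples):
        prop = theta + rng.gauss(0, 0.2)  # Gaussian random-walk proposal
        if 0 < prop < 1 and math.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop  # accept; otherwise keep the current state
        total += theta
    return total / n_samples
```

Both estimators converge to the same posterior mean; the point of the contrast is that MCMC never needs the normalizing constant, which is what makes it usable for posteriors we cannot sample from directly.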

**Variational inference: from basics to modern applications**

by Ádám Arany (14:00)

Bayesian inference, the task of computing the posterior distribution P(H|E) over the variables of interest (H) conditioned on the observed evidence variables (E), requires the evaluation of an often extremely high-dimensional integral. Except for the simplest models, this integral is intractable, and several approximation methods have been developed to address this problem. Variational inference, one of these methods, is based on the calculus of variations: at inference time, an approximate distribution is sought by minimizing its distance to the intractable exact posterior. In deep learning applications we usually apply gradient-based optimization techniques: we impose a parametric form on the approximate posterior, thereby turning inference into a regular optimization problem.
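A minimal sketch of this idea (my illustration, not the speaker's code): fit a parametric approximation q(x) = N(mu, s²) to an unnormalized Gaussian target p̃(x) ∝ exp(−(x − 2)²/2) by gradient ascent on the ELBO. The target is chosen so the expectation in the ELBO has a closed form, which keeps the example to plain gradient updates with no autodiff library.

```python
import math

# ELBO(mu, s) = E_q[log p~(x)] + H(q)
#             = -((mu - 2)^2 + s^2) / 2 + 0.5 * log(2 * pi * e * s^2)
def elbo(mu, s):
    return -((mu - 2) ** 2 + s ** 2) / 2 + 0.5 * math.log(2 * math.pi * math.e * s * s)

def fit(steps=2000, lr=0.05):
    """Gradient ascent on the ELBO in (mu, log s); log s keeps s positive."""
    mu, log_s = -1.0, math.log(0.3)  # deliberately poor initialization
    for _ in range(steps):
        s = math.exp(log_s)
        d_mu = -(mu - 2)         # dELBO/dmu
        d_log_s = -s * s + 1.0   # dELBO/dlog s, via the chain rule through s = e^{log s}
        mu += lr * d_mu
        log_s += lr * d_log_s
    return mu, math.exp(log_s)
```

The optimization converges to mu = 2, s = 1, i.e. the exact posterior, because the variational family contains the target here; in realistic models it does not, and the optimum is only the closest member of the family in KL divergence.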

After a brief recap of the basic concepts from my previous talk on Bayesian inference, I will introduce the motivation for variational inference, illustrate the derivation of the Evidence Lower Bound (ELBO), and discuss applications of the method in deep learning (variational autoencoders, Bayesian deep learning).
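For reference, the ELBO mentioned above can be derived in a few lines via Jensen's inequality (the standard textbook derivation, written here in the abstract's H/E notation; not taken from the talk slides):

$$
\log P(E) \;=\; \log \int P(E, H)\,dH \;=\; \log \mathbb{E}_{q(H)}\!\left[\frac{P(E, H)}{q(H)}\right] \;\ge\; \mathbb{E}_{q(H)}\!\left[\log \frac{P(E, H)}{q(H)}\right] \;=\; \mathrm{ELBO}(q),
$$

where the gap is exactly $\mathrm{KL}\big(q(H)\,\|\,P(H \mid E)\big)$, so maximizing the ELBO over the approximate posterior $q$ is equivalent to minimizing its KL divergence to the exact posterior.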