Exercises for Friday September 28

  • On Friday 21st of September we started on the theme "Bayesian computation"; specifically, we went through parts of Chapter 10 (importance and rejection sampling) and started on Chapter 11 (MCMC). I also went through Nils' Exercise 14 b) and the "Small mixture prior exercise".
  • Next week we will continue with MCMC topics: a bit of theory, and some specific versions (Metropolis, Metropolis-Hastings and Gibbs).
  • Note the R-scripts from the "Small mixture prior exercise" and Nils Ex 9 e).
  • Exercises: 
    1. Monte Carlo exercise I: find some data (n=20 observations) here. Assume the data come from a normal distribution with unknown mean \theta and known variance (\sigma^2=1). Use a normal prior for \theta, N(\mu_0, \tau_0^2), with \mu_0=1 and \tau_0=1. Then the posterior is explicit, with explicit expressions for all quantities of interest (here: the posterior mean, the posterior standard deviation and the 90% posterior interval for \theta). We will pretend, however, that we do not know the full posterior and can only evaluate the unnormalised posterior (prior times likelihood). Compute all three quantities of interest (and a histogram of posterior draws when possible) using a) importance sampling; b) rejection sampling; and c) MCMC (use, for example, a variant of the simple Metropolis scheme we saw in the lecture). Compare the estimated quantities of interest with the true values (and the histogram of draws with the true posterior density).
    2. Monte Carlo exercise II: use the same data as above, but assume a different model for the data, a t-distribution with 3 degrees of freedom: y_i = \theta + (\sigma/\sqrt{3})\,\eps_i, where \eps_i \sim t_3. The variance is still assumed known (\sigma=1). Use the same prior as before, and compute the same quantities as in the previous exercise with the three different Monte Carlo methods (but here we cannot easily compare against the true values as in the previous exercise).
    3. MCMC exercise: set up an MCMC scheme to simulate from a target density of the form p(theta) = 0.25*exp(-abs(theta-a0)) + 0.25*exp(-abs(theta+a0)) with a0 = 2.00. Do this with proposals where theta[i+1] is drawn uniformly on [theta[i] - b0, theta[i] + b0], experimenting to find a decent b0.
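For exercise 1, a minimal sketch of all three methods follows. The course scripts are in R, but this illustration uses Python/NumPy; since the linked data file is not reproduced here, a simulated stand-in data set (hypothetical theta_true = 1.5) is used, so the numbers will differ from those with the course data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-in for the course data: n = 20 observations (hypothetical theta_true).
n = 20
y = rng.normal(1.5, 1.0, size=n)

mu0, tau0, sigma = 1.0, 1.0, 1.0

# Conjugate normal-normal posterior: the "true" answers we pretend not to know.
post_prec = 1 / tau0**2 + n / sigma**2
post_mean = (mu0 / tau0**2 + y.sum() / sigma**2) / post_prec
post_sd = np.sqrt(1 / post_prec)
true_interval = stats.norm.ppf([0.05, 0.95], post_mean, post_sd)

def loglik(theta):
    # Log-likelihood of y given theta (vectorised over theta), up to a constant.
    return -0.5 * np.sum((y[:, None] - theta) ** 2, axis=0) / sigma**2

# a) Importance sampling with the prior as proposal: weights ~ likelihood.
B = 100_000
theta_prop = rng.normal(mu0, tau0, size=B)
logw = loglik(theta_prop)
w = np.exp(logw - logw.max())
w /= w.sum()
is_mean = np.sum(w * theta_prop)
is_sd = np.sqrt(np.sum(w * (theta_prop - is_mean) ** 2))

# b) Rejection sampling with the prior as proposal: accept with probability
# L(theta) / max L, where the likelihood is maximised at theta = ybar.
logL_max = loglik(np.array([y.mean()]))[0]
accept = np.log(rng.uniform(size=B)) < logw - logL_max
theta_rej = theta_prop[accept]

# c) Random-walk Metropolis on the unnormalised log-posterior.
def logpost(theta):
    return stats.norm.logpdf(theta, mu0, tau0) + loglik(np.array([theta]))[0]

M = 20_000
chain = np.empty(M)
chain[0] = y.mean()
lp = logpost(chain[0])
for i in range(1, M):
    prop = chain[i - 1] + rng.normal(0, 0.5)
    lp_prop = logpost(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        chain[i], lp = prop, lp_prop
    else:
        chain[i] = chain[i - 1]
mcmc = chain[2000:]  # discard burn-in

print("true:", post_mean, post_sd, true_interval)
print("IS:  ", is_mean, is_sd)
print("rej: ", theta_rej.mean(), theta_rej.std(), np.percentile(theta_rej, [5, 95]))
print("MCMC:", mcmc.mean(), mcmc.std(), np.percentile(mcmc, [5, 95]))
```

The accepted rejection samples and the post-burn-in chain can be fed directly to a histogram routine for comparison with the true posterior density.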
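For exercise 2, only the likelihood changes: the scaling sigma/sqrt(3) makes Var(y_i) = sigma^2, since a t_3 variable has variance 3. A sketch of the importance-sampling part, again in Python with simulated stand-in data (the other two methods carry over unchanged from the sketch above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Stand-in for the course data (hypothetical theta_true = 1.5).
n = 20
y = rng.normal(1.5, 1.0, size=n)

mu0, tau0, sigma, nu = 1.0, 1.0, 1.0, 3.0

def loglik_t(theta):
    # Model: y_i = theta + (sigma/sqrt(nu)) * eps_i with eps_i ~ t_nu,
    # so (y_i - theta) * sqrt(nu)/sigma is t_nu-distributed.
    z = (y[:, None] - theta) * np.sqrt(nu) / sigma
    return np.sum(stats.t.logpdf(z, df=nu), axis=0)

# Importance sampling with the prior as proposal.
B = 100_000
theta_prop = rng.normal(mu0, tau0, size=B)
logw = loglik_t(theta_prop)
w = np.exp(logw - logw.max())
w /= w.sum()

is_mean = np.sum(w * theta_prop)
is_sd = np.sqrt(np.sum(w * (theta_prop - is_mean) ** 2))

# Weighted 5% and 95% quantiles give the 90% posterior interval.
order = np.argsort(theta_prop)
cdf = np.cumsum(w[order])
lo = theta_prop[order][np.searchsorted(cdf, 0.05)]
hi = theta_prop[order][np.searchsorted(cdf, 0.95)]
print("IS:", is_mean, is_sd, (lo, hi))
```

Note the Jacobian constant sqrt(nu)/sigma in the likelihood is omitted, which is harmless here since the weights are self-normalised.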
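For exercise 3, the uniform proposal is symmetric, so the plain Metropolis acceptance ratio applies. A Python sketch with one tuning choice (b0 = 2.5 is just a starting point to experiment from):

```python
import numpy as np

rng = np.random.default_rng(3)
a0, b0 = 2.0, 2.5  # b0 is a tuning parameter; try out several values

def logtarget(theta):
    # log of the two-component Laplace mixture
    # 0.25*exp(-|theta - a0|) + 0.25*exp(-|theta + a0|)
    return np.log(0.25 * np.exp(-abs(theta - a0)) +
                  0.25 * np.exp(-abs(theta + a0)))

M = 50_000
chain = np.empty(M)
chain[0] = 0.0
lt = logtarget(chain[0])
n_acc = 0
for i in range(1, M):
    prop = chain[i - 1] + rng.uniform(-b0, b0)   # symmetric proposal
    lt_prop = logtarget(prop)
    if np.log(rng.uniform()) < lt_prop - lt:     # Metropolis accept step
        chain[i], lt = prop, lt_prop
        n_acc += 1
    else:
        chain[i] = chain[i - 1]

print("acceptance rate:", n_acc / (M - 1))
```

A useful diagnostic: the target is symmetric about zero, so the chain mean should be near 0 and the histogram bimodal with modes near ±a0; a b0 that is too small will leave the chain stuck in one mode for long stretches.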


Published Sep. 21, 2018 3:30 PM - Last modified Sep. 26, 2018 9:59 AM