In this lecture we will illustrate MCMC sampling under the Cauchy prior, represented as a mixture of g-priors, and examine properties of the resulting estimators. To address problems with estimation when the design matrix is nearly singular, we will introduce Ridge Regression.
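As a brief preview, and ignoring details such as centering of the predictors and the treatment of the intercept, the Cauchy prior can be written as a scale mixture of g-priors; the Inverse-Gamma mixing distribution shown below is the Zellner-Siow choice, stated here only as one common example, with $X$ the $n \times p$ design matrix:
\[
\beta \mid \sigma^{2}, g \sim \mathsf{N}\!\bigl(0,\; g\,\sigma^{2}(X^\top X)^{-1}\bigr), \qquad
g \sim \mathsf{IG}\!\bigl(\tfrac{1}{2}, \tfrac{n}{2}\bigr)
\;\;\Longrightarrow\;\;
\beta \mid \sigma^{2} \sim \mathsf{Cauchy}\!\bigl(0,\; \sigma^{2}(X^\top X/n)^{-1}\bigr).
\]
One simple MCMC scheme then alternates draws of $\beta$, $\sigma^{2}$, and $g$ from their full conditional distributions. For the nearly singular case, the ridge estimator replaces $(X^\top X)^{-1}$ with a regularized inverse,
\[
\hat{\beta}_{\lambda} = (X^\top X + \lambda I_p)^{-1} X^\top Y, \qquad \lambda > 0,
\]
which is well defined even when $X^\top X$ is singular.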
Readings: Christensen Chapter 2 and Chapter 6, Appendix A & B as needed
In this lecture we will continue with the Bayesian perspective on estimation in linear models and discuss various recommendations for conjugate Normal-Gamma priors and the resulting posterior distributions, highlighting the advantages and disadvantages of conjugate priors. We will then discuss a special case of a conjugate prior, namely Zellner’s g-prior. Finally, we will discuss the role of invariance in the construction of default prior distributions and the resulting inference and prediction.
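For reference, one common form of Zellner's g-prior is sketched below; the centering at an arbitrary prior mean $b_0$ and the standard noninformative prior on $\sigma^{2}$ are stated here as assumptions for illustration rather than as the specific choices adopted later:
\[
\beta \mid \sigma^{2}, g \sim \mathsf{N}\!\bigl(b_0,\; g\,\sigma^{2}(X^\top X)^{-1}\bigr), \qquad p(\sigma^{2}) \propto 1/\sigma^{2},
\]
so the prior covariance of $\beta$ is a scaled version of the covariance of the least squares estimator, and the single scalar $g$ controls the shrinkage of the posterior mean,
\[
E[\beta \mid Y] = \tfrac{g}{1+g}\,\hat{\beta} + \tfrac{1}{1+g}\,b_0,
\]
toward the prior mean $b_0$.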