  1. Let \(Y\sim N_{n\times p}( 1\mu^\top , \Sigma\otimes I )\) and let \((\bar y,\hat\Sigma)\) be the sample mean vector and sample covariance matrix.

    1. Using stochastic representations or otherwise, show that the distribution of \((\bar y-\mu)^\top \Sigma^{-1} (\bar y-\mu)\) does not depend on \(\mu\) or \(\Sigma\) (it is a pivotal quantity). Describe how this result could be used to test the hypothesis that \((\mu, \Sigma)\) are equal to some specific numerical values, say \((\mu_0,\Sigma_0)\) (a Monte Carlo sketch of such a test appears after the problem list).
    2. Show that the distribution of \((\bar y-\mu)^\top \hat\Sigma^{-1} (\bar y-\mu)\) does not depend on \(\mu\) or \(\Sigma\). Describe how this result could be used to test the hypothesis that \(\mu=\mu_0\).
  2. Let \(Y\sim N_{n\times p}( 1\mu^\top , \Sigma\otimes I )\) and let \(\mu \sim N_p(m_0,V_0)\).

    1. Calculate the posterior distribution of \(\mu \mid Y,\Sigma\) (a numerical check is sketched after the problem list).
    2. Find the Bayes estimator of \(\mu\) under squared error loss \(L(\hat{\mu},\mu)=||\hat\mu-\mu||^2\).
    3. Find the Bayes estimator of \(\mu\) under the invariant loss \(L(\hat{\mu},\mu)= (\hat\mu - \mu)^\top \Sigma^{-1} (\hat\mu-\mu)\).
  3. Let \(y\sim N_p(\mu, I_p )\) where \(p=3\). Recall from class that the Bayes estimator under the prior distribution \(\mu\sim N(0,\lambda I)\) is \[ \hat\mu_\lambda = (1/\lambda +1)^{-1} y = (1-\frac{1}{\lambda+1}) y. \] Using a simulation study or otherwise (one such study is sketched after the problem list), compute (approximately) the risk of \[ \hat\mu_{\hat\lambda} = (1-\frac{1}{\hat\lambda+1}) y \] and of \[ \hat\mu_{JS} = (1-\widehat{\frac{1}{\lambda+1}}) y, \] where \(\hat\lambda\) is an unbiased estimator of \(\lambda\) under the marginal distribution of \(y\), and \(\hat\mu_{JS}\) is the James-Stein estimator. Plot the risk as a function of \(||\mu||^2\) and compare to that of the unbiased estimator \(y\).

  4. Consider estimating \(\Sigma\) based on observing \(S\sim W_{n}(\Sigma)\) for some unknown \(\Sigma\in \mathcal S_p^+\).

    1. Show that the model is GL-invariant, that is, invariant under transformations of the form \(gS = ASA^\top\), where \(A\) is any non-singular \(p\times p\) matrix.
    2. Show that the loss function \(L(\hat\Sigma,\Sigma) = \text{tr}(\hat\Sigma\Sigma^{-1}) - \log |\hat\Sigma \Sigma^{-1}|\) is invariant, in that \(L(A\hat\Sigma A^\top, A\Sigma A^\top)= L(\hat\Sigma,\Sigma)\) for all such \(A\) (a numerical check is sketched after the problem list).
    3. Show that if \(\hat\Sigma\) is GL-equivariant, then \(\hat\Sigma=cS\) for some \(c>0\).
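
The sketch below is a minimal Monte Carlo companion to Problem 1.1: it simulates the pivot under two very different choices of \((\mu,\Sigma)\) to illustrate that its distribution is unchanged, and uses a simulated quantile as the critical value for testing \((\mu,\Sigma)=(\mu_0,\Sigma_0)\). The sample sizes, parameter values, and the use of numpy are illustrative assumptions, not part of the problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, n_rep = 20, 3, 20_000

def pivot(mu, Sigma):
    """One draw of (ybar - mu)^T Sigma^{-1} (ybar - mu)."""
    Y = rng.multivariate_normal(mu, Sigma, size=n)   # rows iid N_p(mu, Sigma)
    d = Y.mean(axis=0) - mu
    return d @ np.linalg.solve(Sigma, d)

# The simulated distribution should look the same for any (mu, Sigma):
settings = [(np.zeros(p), np.eye(p)),
            (np.array([5.0, -2.0, 1.0]), np.diag([1.0, 4.0, 9.0]))]
for mu, Sigma in settings:
    draws = np.array([pivot(mu, Sigma) for _ in range(n_rep)])
    print(np.quantile(draws, [0.50, 0.95]))          # quantiles agree across settings

# Level-0.05 test of H0: (mu, Sigma) = (mu0, Sigma0): given data Y, reject when
# (ybar - mu0)^T Sigma0^{-1} (ybar - mu0) exceeds the simulated 0.95 quantile.
```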
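
For Problem 2.1, a crude self-normalised importance-sampling approximation of \(E[\mu\mid Y,\Sigma]\), with the prior as proposal, gives something to compare your derived posterior against. The dimensions, the particular \(\Sigma\), \(m_0\), \(V_0\), and the sample sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 15, 2
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
m0, V0 = np.zeros(p), 4.0 * np.eye(p)

mu_true = rng.multivariate_normal(m0, V0)            # draw mu from the prior
Y = rng.multivariate_normal(mu_true, Sigma, size=n)  # rows iid N_p(mu, Sigma)

# Self-normalised importance sampling with the prior as proposal: the weight of
# each candidate mu is proportional to the likelihood of Y at that mu.
M = 200_000
mus = rng.multivariate_normal(m0, V0, size=M)        # (M, p) draws from the prior
Sinv = np.linalg.inv(Sigma)
quad = np.einsum('ji,ik,jk->j', mus, Sinv, mus)      # mu_j^T Sigma^{-1} mu_j
cross = mus @ Sinv @ Y.sum(axis=0)                   # mu_j^T Sigma^{-1} sum_i y_i
loglik = cross - 0.5 * n * quad                      # log-likelihood, up to a constant in mu
w = np.exp(loglik - loglik.max())
w /= w.sum()
print("approximate posterior mean:", w @ mus)        # compare with your E[mu | Y, Sigma]
```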
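
For Problem 3, the simulation study referenced in the problem might look like the sketch below. The explicit forms \(\hat\mu_{\hat\lambda}=(1-p/||y||^2)\,y\) (from \(\hat\lambda=||y||^2/p-1\)) and \(\hat\mu_{JS}=(1-(p-2)/||y||^2)\,y\) are one reading of the definitions in the problem; check them against your own derivation before trusting the plot. The grid, the number of replications, and matplotlib are arbitrary choices.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
p, n_rep = 3, 200_000
norm2_grid = np.linspace(0.0, 20.0, 21)          # grid of ||mu||^2 values

risk_plug, risk_js = [], []
for c in norm2_grid:
    mu = np.zeros(p)
    mu[0] = np.sqrt(c)                           # by symmetry only ||mu||^2 matters
    y = mu + rng.standard_normal((n_rep, p))     # y ~ N_p(mu, I_p)
    ny2 = np.sum(y**2, axis=1)
    mu_plug = (1.0 - p / ny2)[:, None] * y       # (1 - 1/(lambda_hat + 1)) y
    mu_js = (1.0 - (p - 2) / ny2)[:, None] * y   # James-Stein
    risk_plug.append(np.mean(np.sum((mu_plug - mu)**2, axis=1)))
    risk_js.append(np.mean(np.sum((mu_js - mu)**2, axis=1)))

plt.plot(norm2_grid, risk_plug, label=r"$\hat\mu_{\hat\lambda}$")
plt.plot(norm2_grid, risk_js, label=r"$\hat\mu_{JS}$")
plt.axhline(p, linestyle="--", color="k", label=r"unbiased $y$ (risk $= p$)")
plt.xlabel(r"$||\mu||^2$")
plt.ylabel("estimated risk")
plt.legend()
plt.show()
```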
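
For Problem 4.2, a quick numerical check (not a proof) that the loss is unchanged when both arguments are transformed by the same non-singular \(A\). The dimension and the random draws are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 4

def loss(Sigma_hat, Sigma):
    """tr(Sigma_hat Sigma^{-1}) - log|Sigma_hat Sigma^{-1}|."""
    M = Sigma_hat @ np.linalg.inv(Sigma)
    _, logdet = np.linalg.slogdet(M)             # determinant is positive for SPD arguments
    return np.trace(M) - logdet

def random_spd(p):
    B = rng.standard_normal((p, p))
    return B @ B.T + p * np.eye(p)               # comfortably positive definite

Sigma_hat, Sigma = random_spd(p), random_spd(p)
A = rng.standard_normal((p, p))                  # almost surely non-singular
print(loss(Sigma_hat, Sigma))
print(loss(A @ Sigma_hat @ A.T, A @ Sigma @ A.T))  # agrees up to rounding error
```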