Scale parameter estimation: Let \(y\sim N_n(X\beta,\sigma^2 V)\) where \(X\) and \(V\) are known but \(\beta\) and \(\sigma^2\) are unknown. Let \(\hat\beta\) be the OLS estimator and \(\hat\beta_V\) be the GLS estimator.
- Let \(\hat\epsilon = y-X\hat\beta\). Find an unbiased estimator of \(\sigma^2\) that is a scalar multiple of \(\hat\epsilon^\top\hat\epsilon\).
- Let \(\hat\epsilon_V = y-X\hat\beta_V\). Find an unbiased estimator of \(\sigma^2\) that is a scalar multiple of \(\hat\epsilon_V^\top V^{-1} \hat\epsilon_V\).
- Do you think one of these estimators is generally better than the other?
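The unbiasedness claims above can be checked numerically. Below is a minimal Monte Carlo sketch: the design matrix, the diagonal \(V\), and all parameter values are illustrative assumptions, not part of the exercise. The OLS residual sum of squares has expectation \(\sigma^2\,\mathrm{tr}\!\big((I-P)V\big)\) (with \(P\) the OLS hat matrix), while the GLS weighted residual sum of squares has expectation \(\sigma^2(n-p)\), which fixes the scalar multiples.

```python
import numpy as np

# Illustrative setup (all numbers are assumptions for this sketch).
rng = np.random.default_rng(0)
n, p, sigma2 = 20, 3, 2.0
beta = np.array([1.0, -2.0, 0.5])
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
V = np.diag(rng.uniform(0.5, 2.0, size=n))      # known, positive definite
Vinv = np.linalg.inv(V)
L = np.linalg.cholesky(V)

P = X @ np.linalg.solve(X.T @ X, X.T)           # OLS hat matrix
c_ols = np.trace((np.eye(n) - P) @ V)           # E[e'e] = sigma^2 * c_ols

est_ols, est_gls = [], []
for _ in range(5000):
    y = X @ beta + np.sqrt(sigma2) * (L @ rng.normal(size=n))
    e_ols = y - P @ y                           # OLS residuals
    est_ols.append(e_ols @ e_ols / c_ols)
    b_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
    e_gls = y - X @ b_gls                       # GLS residuals
    est_gls.append(e_gls @ Vinv @ e_gls / (n - p))

# Both Monte Carlo means should sit near sigma2 = 2.0.
print(np.mean(est_ols), np.mean(est_gls))
```

Comparing the two sample variances in this simulation hints at the answer to the last part: the GLS-based estimator typically has smaller spread when \(V\) is far from the identity.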
Let \(y\sim N_n(X \beta , \sigma^2 V)\) where \(V\) is known. Find the MLE of \((\beta,\sigma^2)\), and its distribution.
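As a sanity check on the closed-form answer, the sketch below compares it against a generic numerical maximizer of the log-likelihood. The data-generating values are illustrative assumptions; the closed form being checked is that \(\hat\beta\) is the GLS estimator and that profiling out \(\beta\) gives the \(\sigma^2\) MLE with divisor \(n\) (not \(n-p\)).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

# Illustrative data (design, V, and true parameters are assumptions).
rng = np.random.default_rng(1)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
V = np.diag(rng.uniform(0.5, 1.5, size=n))      # known, positive definite
Vinv = np.linalg.inv(V)
y = X @ np.array([2.0, -1.0]) + np.linalg.cholesky(1.5 * V) @ rng.normal(size=n)

# For any fixed sigma^2 the likelihood is maximized in beta by GLS;
# plugging that back in and maximizing over sigma^2 gives divisor n.
beta_hat = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
resid = y - X @ beta_hat
sigma2_hat = resid @ Vinv @ resid / n

# Cross-check: a black-box optimizer should land on the same point.
def negloglik(theta):
    b, s2 = theta[:p], np.exp(theta[p])         # log-parametrize sigma^2 > 0
    return -multivariate_normal.logpdf(y, mean=X @ b, cov=s2 * V)

res = minimize(negloglik, x0=np.zeros(p + 1), method="BFGS")
print(beta_hat, sigma2_hat, np.exp(res.x[p]))
```

The optimizer's \(\beta\) and \(\sigma^2\) should agree with the closed form to numerical tolerance.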
Quadratic forms: Let \(U\in \mathbb R^{n\times n}\) be an orthogonal matrix, so \(UU^\top = U^\top U = I_{n}\). Partition the columns of \(U\) into three matrices \(U_1\in \mathbb R^{n\times p_1}\), \(U_2\in \mathbb R^{n\times p_2}\) and \(U_3\in \mathbb R^{n\times p_3}\), where \(p_1+p_2+p_3= n\) and \(U\) equals \(U_1\), \(U_2\) and \(U_3\) bound together column-wise.
- Show that \(U_1 U_1^\top + U_2 U_2^\top + U_3 U_3^\top = I_{n}\).
- Let \(z\sim N_n(0, I_n)\). Find the distributions of \(x_k = z^\top U_kU_k^\top z\) for \(k\in\{1,2,3\}\), and show that \(x_1\), \(x_2\) and \(x_3\) are independent. Find the distribution of \(x_1+x_2+x_3\) and compare it to the distribution of \(x=z^\top z\).
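Both claims can be checked numerically with an arbitrary orthogonal \(U\); the sketch below is illustrative (the partition sizes are assumptions). Each \(x_k = z^\top U_k U_k^\top z = \lVert U_k^\top z\rVert^2\) should behave like a \(\chi^2_{p_k}\) draw, and the three pieces should sum exactly to \(z^\top z\).

```python
import numpy as np

# Arbitrary orthogonal U via QR; partition sizes are illustrative.
rng = np.random.default_rng(2)
n, p1, p2, p3 = 10, 3, 3, 4
U, _ = np.linalg.qr(rng.normal(size=(n, n)))    # columns: orthonormal basis
U1, U2, U3 = U[:, :p1], U[:, p1:p1 + p2], U[:, p1 + p2:]

# The three projectors sum to the identity.
S = U1 @ U1.T + U2 @ U2.T + U3 @ U3.T
print(np.allclose(S, np.eye(n)))                # True

# x_k = ||U_k' z||^2 over many draws of z ~ N(0, I_n).
z = rng.normal(size=(100_000, n))
x1, x2, x3 = (((z @ Uk) ** 2).sum(axis=1) for Uk in (U1, U2, U3))

# The pieces sum exactly (to floating point) to z'z ~ chi^2_n ...
print(np.allclose(x1 + x2 + x3, (z ** 2).sum(axis=1)))   # True
# ... and x1 has the chi^2_{p1} mean and variance: near p1 and 2*p1.
print(x1.mean(), x1.var())
```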
Projecting out nuisance factors: Suppose we have the ordinary linear model \(y= W\alpha + X \beta + \epsilon\) where \(E[\epsilon]=0\), \(V[\epsilon]=\sigma^2 I\), and \(W\in \mathbb R^{n\times q}\) and \(X\in \mathbb R^{n\times p}\) are observed model matrices, such that the columns of \(W\) and \(X\) taken together are linearly independent. Suppose we are only interested in estimating \(\beta\).
- Let \(N\) be a matrix whose columns form an orthonormal basis for the orthogonal complement of \(C(W)\), the column space of \(W\) (equivalently, the null space of \(W^\top\)). Using this matrix, find a transformation \(\tilde y\) of \(y\) so that the expectation of \(\tilde y\) depends on \(X\) but not \(W\). Write out the linear model for \(\tilde y\), and find the OLS estimator \(\hat\beta_N\) of \(\beta\) under this model in terms of \(y\), \(X\) and \(N\) (or \(I-P\), where \(P\) is the orthogonal projection onto \(C(W)\)).
- Let \((\hat\alpha,\hat\beta)\) be the OLS estimator of \((\alpha,\beta)\) for the full linear model \(E[y]= W\alpha+ X\beta\). Using results on partitioned matrices, show that \(\hat\beta_N = \hat\beta\).
- Obtain a form for the usual unbiased estimator of \(\sigma^2\) under the transformed model from the first part, and also under the full model from the second part, and show that the two estimators are the same.
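The identity in the second part is the Frisch–Waugh–Lovell theorem, and the sketch below checks it numerically along with the equality of residual sums of squares from the last part. All dimensions and data are illustrative assumptions.

```python
import numpy as np

# Illustrative dimensions and data (assumptions for this sketch).
rng = np.random.default_rng(3)
n, q, p = 30, 2, 3
W = rng.normal(size=(n, q))
X = rng.normal(size=(n, p))
y = W @ rng.normal(size=q) + X @ rng.normal(size=p) + rng.normal(size=n)

# P projects onto C(W); M = I - P projects onto its orthogonal complement.
P = W @ np.linalg.solve(W.T @ W, W.T)
M = np.eye(n) - P

# Reduced model: regress (I - P) y on (I - P) X.
beta_N = np.linalg.lstsq(M @ X, M @ y, rcond=None)[0]

# Full model: regress y on [W, X] jointly and read off the X block.
Z = np.hstack([W, X])
coef = np.linalg.lstsq(Z, y, rcond=None)[0]
beta_full = coef[q:]
print(np.allclose(beta_N, beta_full))           # True: FWL identity

# The residual sums of squares also coincide, and both models have
# n - q - p residual degrees of freedom, so the sigma^2 estimators agree.
rss_N = np.sum((M @ y - M @ X @ beta_N) ** 2)
rss_full = np.sum((y - Z @ coef) ** 2)
print(np.isclose(rss_N, rss_full))              # True
```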