Application exercise: demonstrating the CLT via simulation
Inference based on the Central Limit Theorem
Due Thursday: Read Sections 2.5 - 2.8 on OpenIntro: Intro Stat with Randomization and Simulation: http://www.openintro.org/isrs
Each sample from the population yields a slightly different sample statistic (sample mean, sample proportion, etc.)
The variability of these sample statistics is measured by the standard error
Previously we quantified this variability via simulation
Today we talk about the theory underlying sampling distributions
A sampling distribution is the distribution of a sample statistic computed on random samples of size \(n\) taken from a population
In practice it is impossible to construct sampling distributions since it would require having access to the entire population
Today, for demonstration purposes, we will assume we have access to the population data, construct sampling distributions, and examine their shapes, centers, and spreads
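A minimal sketch of what this can look like in code; the population below is hypothetical (simulated with rgamma), and the names pop and sample_means are ours:

library(ggplot2)

# A hypothetical right-skewed population of 100,000 values
set.seed(1234)
pop <- rgamma(100000, shape = 2, rate = 0.5)

# Draw 5000 random samples of size n = 50 and record each sample mean
sample_means <- replicate(5000, mean(sample(pop, size = 50)))

# Sampling distribution of the mean: examine shape, center, and spread
ggplot(data = data.frame(x_bar = sample_means), aes(x = x_bar)) +
  geom_histogram(bins = 30)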
d <- data.frame(norm_samp = rnorm(100, mean = 50, sd = 5))
ggplot(data = d, aes(sample = norm_samp)) +
  geom_point(alpha = 0.7, stat = "qq")

Data are plotted on the y-axis of a normal probability plot and theoretical quantiles (following a normal distribution) on the x-axis.
If there is a one-to-one relationship between the data and the theoretical quantiles, then the data follow a nearly normal distribution.
Since a one-to-one relationship would appear as a straight line on a scatter plot, the closer the points are to a perfect straight line, the more confident we can be that the data follow the normal model.
| Data (y-coordinates) | Percentile | Theoretical Quantiles (x-coordinates) |
|---|---|---|
| 37.5 | 0.5 / 100 = 0.005 | qnorm(0.005) = -2.58 |
| 38.0 | 1.5 / 100 = 0.015 | qnorm(0.015) = -2.17 |
| 38.3 | 2.5 / 100 = 0.025 | qnorm(0.025) = -1.96 |
| 39.5 | 3.5 / 100 = 0.035 | qnorm(0.035) = -1.81 |
| … | … | … |
| 61.9 | 99.5 / 100 = 0.995 | qnorm(0.995) = 2.58 |
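A short sketch that reproduces this pairing for the 100 simulated norm_samp values from the code above (the data column will differ from the table's, which came from a particular sample, but the percentile and quantile columns match):

y <- sort(d$norm_samp)            # data in increasing order (y-coordinates)
p <- (seq_along(y) - 0.5) / 100   # percentiles: 0.005, 0.015, ..., 0.995
x <- qnorm(p)                     # theoretical quantiles (x-coordinates)
head(cbind(y, p, x), 4)           # first four rows mirror the table's layout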
Best to think about what is happening with the most extreme values. Here the biggest values are bigger than we would expect and the smallest values are smaller than we would expect (for a normal): the tails are thicker than the normal model's.
Here the biggest values are smaller than we would expect and the smallest values are bigger than we would expect: the tails are thinner than the normal model's.
Here the biggest values are bigger than we would expect and the smallest values are also bigger than we would expect: the distribution is right skewed.
Here the biggest values are smaller than we would expect and the smallest values are also smaller than we would expect: the distribution is left skewed.
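To see one of these patterns concretely, here is a sketch with a simulated right-skewed sample (the rexp choice is ours), using the same normal probability plot code as above:

set.seed(5678)
d_skew <- data.frame(skew_samp = rexp(100, rate = 1))  # right-skewed sample
ggplot(data = d_skew, aes(sample = skew_samp)) +
  geom_point(alpha = 0.7, stat = "qq")

Both ends of the plot bend upward: the biggest values are bigger, and the smallest values are also bigger, than the normal model predicts.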
See course website for details
If certain conditions are met, the sampling distribution of the sample statistic will be nearly normally distributed, with mean equal to the population parameter and standard error that shrinks as the sample size grows (for a sample mean, \(SE = \frac{\sigma}{\sqrt{n}}\), inversely proportional to the square root of the sample size).
Confirm that your findings from the application exercise match up with what the CLT outlines.
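One way to confirm this numerically, reusing the pop and sample_means objects from the sampling-distribution sketch earlier (where \(n = 50\)):

mean(sample_means)   # center of the sampling distribution...
mean(pop)            # ...should be close to the population mean

sd(sample_means)     # the standard error...
sd(pop) / sqrt(50)   # ...should be close to sigma / sqrt(n)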
If necessary conditions are met, we can also use inference methods based on the CLT:
use the CLT to calculate the SE of the sample statistic of interest (sample mean, sample proportion, difference between sample means, etc.)
calculate the test statistic: how many SEs the observed sample statistic falls from the null value
use the test statistic to calculate the p-value, the probability of an observed or more extreme outcome given that the null hypothesis is true (a worked sketch for a sample proportion follows this list)
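A minimal worked sketch of these three steps for a sample proportion, with hypothetical numbers: testing \(H_0: p = 0.5\) against \(H_A: p \ne 0.5\), having observed \(\hat{p} = 0.6\) in \(n = 200\) trials:

p0 <- 0.5     # null value
p_hat <- 0.6  # observed sample proportion (hypothetical)
n <- 200      # sample size (hypothetical)

# (1) SE of the sample proportion under H0, from the CLT
se <- sqrt(p0 * (1 - p0) / n)

# (2) test statistic: how many SEs p_hat falls from p0
z <- (p_hat - p0) / se

# (3) two-sided p-value from the standard normal distribution
2 * pnorm(abs(z), lower.tail = FALSE)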
Also called the standard normal distribution: \(Z \sim N(\mu = 0, \sigma = 1)\)
Finding probabilities under the normal curve:
pnorm(-1.96)
## [1] 0.0249979
pnorm(1.96, lower.tail = FALSE)
## [1] 0.0249979
qnorm(0.025)
## [1] -1.959964
qnorm(0.975)
## [1] 1.959964
Also unimodal and symmetric, and centered at 0
Thicker tails than the normal distribution (to make up for additional variability introduced by using \(s\) instead of \(\sigma\) in calculation of the SE)
pt(-1.96, df = 9)
## [1] 0.0408222
pt(1.96, df = 9, lower.tail = FALSE)
## [1] 0.0408222
qt(0.025, df = 9)
## [1] -2.262157
qt(0.975, df = 9)
## [1] 2.262157
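A quick check that the thicker tails shrink toward the normal's as the degrees of freedom grow (pt is vectorized over df):

pt(-1.96, df = c(9, 30, 100, 1000))  # t tail probabilities shrink toward...
pnorm(-1.96)                         # ...the normal tail probability, about 0.025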
In 2001 the average GPA of students at Duke University was 3.37. This semester we surveyed 63 students in a statistics course about their GPAs. The mean was 3.58, and the standard deviation 0.53. A histogram of the data is shown below. Assuming that this sample is random and representative of all Duke students (bit of a leap of faith?), do these data provide convincing evidence that the average GPA of Duke students has changed over the last decade?
\(H_0: \mu = 3.37; H_A: \mu \ne 3.37\)
\(\bar{x} \sim N\left(mean = \mu = 3.37, SE = \frac{\sigma}{\sqrt{n}} = \frac{0.53}{\sqrt{63}} = 0.0668 \right)\)
\(T = \frac{3.58 - 3.37}{0.0668} \approx 3.14\), \(df = n - 1 = 63 - 1 = 62\)
mu <- 3.37     # null value: the 2001 average GPA
x_bar <- 3.58  # sample mean
s <- 0.53      # sample standard deviation
n <- 63        # sample size

# test statistic: how many SEs the sample mean falls from the null value
t_obs <- (x_bar - mu) / (s / sqrt(n))

# two-sided p-value from the t distribution with n - 1 df
(1 - pt(t_obs, df = n - 1)) * 2
## [1] 0.002550524
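An equivalent way to compute the same two-sided p-value, avoiding the 1 - pt() subtraction:

2 * pt(abs(t_obs), df = n - 1, lower.tail = FALSE)
## [1] 0.002550524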
We have now been introduced to both simulation-based and CLT-based methods for statistical inference.
For most simulation-based methods you wrote your own code; for CLT-based methods we introduced some built-in functions.
Take-away message: if certain conditions are met, CLT-based methods may be used for statistical inference. To do so, we need to know how the standard error is calculated for the given sample statistic of interest.
t.test, for comparing two means (groups 1 and 2): \(H_0: \mu_1 = \mu_2\)
prop.test, for comparing two proportions (groups 1 and 2): \(H_0: p_1 = p_2\)
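A sketch of what calling these built-in functions looks like, on made-up data (the group values and counts below are hypothetical):

set.seed(42)
g1 <- rnorm(30, mean = 5.0, sd = 1)  # hypothetical measurements, group 1
g2 <- rnorm(30, mean = 5.5, sd = 1)  # hypothetical measurements, group 2
t.test(g1, g2)                       # tests H0: mu_1 = mu_2

# prop.test takes successes and sample sizes for each group
prop.test(x = c(40, 55), n = c(100, 100))  # tests H0: p_1 = p_2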