CLT based inference, Pt 1


Today’s agenda


  • Central Limit Theorem

    • Aside: Evaluating normality graphically
  • Inference based on the Central Limit Theorem


  • Due Thursday: Read Sections 2.5 - 2.8 on OpenIntro: Intro Stat with Randomization and Simulation (http://www.openintro.org/isrs)

  • Due Next Tuesday: HW4

Notation


  • Means:
    • Population: mean = \(\mu\), standard deviation = \(\sigma\)
    • Sample: mean = \(\bar{x}\), standard deviation = \(s\)
  • Proportions:
    • Population: \(p\)
    • Sample: \(\hat{p}\)


  • Standard error: \(SE\)

Sample Statistics and Sampling Distributions

Variability of sample statistics

  • Each sample from the population yields a slightly different sample statistic (sample mean, sample proportion, etc.)

  • The variability of these sample statistics is measured by the standard error

  • Previously we quantified this value via simulation

  • Today we talk about the theory underlying sampling distributions

Sampling distribution

  • The sampling distribution is the distribution of sample statistics computed from random samples of size \(n\) taken from a population

  • In practice it is impossible to construct sampling distributions since it would require having access to the entire population

  • Today, for demonstration purposes, we will assume we have access to the population data, construct sampling distributions (as in the sketch below), and examine their shapes, centers, and spreads
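
For instance, here is a minimal sketch of this idea using a simulated (made-up) population rather than real data:

# simulate an artificial population, then build the sampling distribution
# of the sample mean from 1000 random samples of size n = 50
set.seed(123)
population = rnorm(100000, mean = 50, sd = 5)

sample_means = replicate(1000, mean(sample(population, size = 50)))

mean(sample_means)   # center of the sampling distribution: close to 50
sd(sample_means)     # spread of the sampling distribution: the standard error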

Evaluating normality: Normal probability plots

Normal probability plot

library(ggplot2)  # load ggplot2 for plotting (assumed loaded once for all plots that follow)

# simulate 100 observations from a normal distribution
d = data.frame(norm_samp = rnorm(100, mean = 50, sd = 5))

ggplot(data = d, aes(sample = norm_samp)) +
  geom_point(alpha = 0.7, stat = "qq")

Anatomy of a normal probability plot

  • Data are plotted on the y-axis of a normal probability plot and theoretical quantiles (following a normal distribution) on the x-axis.

  • If there is a one-to-one relationship between the data and the theoretical quantiles, then the data follow a nearly normal distribution.

  • Since a one-to-one relationship would appear as a straight line on a scatter plot, the closer the points are to a perfect straight line, the more confident we can be that the data follow the normal model.
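
For reference, newer versions of ggplot2 (3.0 or later) can also draw the straight line the points would follow if the data were exactly normal; a minimal sketch reusing the d data frame from the previous slide:

ggplot(data = d, aes(sample = norm_samp)) +
  geom_qq(alpha = 0.7) +         # equivalent to geom_point(stat = "qq")
  geom_qq_line(color = "red")    # reference line for a perfectly normal sample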

Constructing a normal probability plot

Data (y-coordinates)   Percentile             Theoretical Quantiles (x-coordinates)
37.5                   0.5 / 100 = 0.005      qnorm(0.005) = -2.58
38.0                   1.5 / 100 = 0.015      qnorm(0.015) = -2.17
38.3                   2.5 / 100 = 0.025      qnorm(0.025) = -1.96
39.5                   3.5 / 100 = 0.035      qnorm(0.035) = -1.81
...                    ...                    ...
61.9                   99.5 / 100 = 0.995     qnorm(0.995) = 2.58
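
A sketch of the same construction in R, reusing the simulated d from earlier (the percentile attached to the \(i\)-th smallest observation is \((i - 0.5)/n\)):

# build the plot coordinates by hand
y = sort(d$norm_samp)            # ordered data become the y-coordinates
n = length(y)
percentiles = (1:n - 0.5) / n    # 0.5/100, 1.5/100, ..., 99.5/100
x = qnorm(percentiles)           # theoretical normal quantiles: x-coordinates

plot(x, y)                       # same pattern as the ggplot version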


Fat tails

Best to think about what is happening with the most extreme values - here the biggest values are bigger than we would expect and the smallest values are smaller than we would expect (for a normal).

Skinny tails

Here the biggest values are smaller than we would expect and the smallest values are bigger than we would expect.

Right skew

Here the biggest values are bigger than we would expect and the smallest values are also bigger than we would expect.

Left skew

Here the biggest values are smaller than we would expect and the smallest values are also smaller than we would expect.
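
To see these patterns for yourself, here is a minimal sketch that simulates right-skewed data (via an exponential distribution) and plots it; the other shapes can be explored by swapping in a different generator:

# right-skewed sample: the points in both tails sit above the line a normal would follow
skewed = data.frame(x = rexp(100, rate = 1))

ggplot(data = skewed, aes(sample = x)) +
  geom_point(alpha = 0.7, stat = "qq")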

Central Limit Theorem

In practice…

We can’t directly know what the sampling distribution looks like, because we only draw a single sample.

  • The whole point of statistical inference is to deal with this issue: observe only one sample, try to make inference about the entire population

  • We have already seen that there are simulation-based methods that help us derive the sampling distribution

  • Additionally, there are theoretical results (Central Limit Theorem) that tell us what the sampling distribution should look like (for certain sample statistics)

Central Limit Theorem

If certain conditions are met (more on this in a bit), the sampling distribution of the sample statistic will be nearly normally distributed with mean equal to the population parameter and standard error proportional to the inverse of the square root of the sample size.

  • Single mean: \(\bar{x} \sim N\left(mean = \mu, sd = \frac{\sigma}{\sqrt{n}}\right)\)
  • Difference between two means: \((\bar{x}_1 - \bar{x}_2) \sim N\left(mean = (\mu_1 - \mu_2), sd = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}} \right)\)
  • Single proportion: \(\hat{p} \sim N\left(mean = p, sd = \sqrt{\frac{p (1-p)}{n}} \right)\)
  • Difference between two proportions: \((\hat{p}_1 - \hat{p}_2) \sim N\left(mean = (p_1 - p_2), sd = \sqrt{\frac{p_1 (1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}} \right)\)

Conditions:

  • Independence: The sampled observations must be independent. This is difficult to check, but the following are useful guidelines:
    • the sample must be random
    • if sampling without replacement, sample size must be less than 10% of the population size
  • Sample size / distribution:
    • numerical data: The more skewed the population distribution (which we assess from the sample), the larger a sample we need. Usually \(n > 30\) is considered a large enough sample for population distributions that are not extremely skewed.
    • categorical data: At least 10 successes and 10 failures.
  • If comparing two populations, the groups must be independent of each other, and all conditions should be checked for both groups.
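
As a quick check of the single-proportion result above, a minimal simulation sketch with made-up values \(p = 0.3\) and \(n = 200\):

# draw many samples of size n from a population with proportion p,
# and compare the spread of the sample proportions to the CLT formula
set.seed(20)
p = 0.3
n = 200

p_hats = replicate(5000, mean(rbinom(n, size = 1, prob = p)))

mean(p_hats)            # close to p
sd(p_hats)              # close to the value below
sqrt(p * (1 - p) / n)   # CLT standard error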

Standard Error

The standard error is the standard deviation of the sampling distribution.

  • Single mean: \(SE = \frac{\sigma}{\sqrt{n}}\)

  • Difference between two means: \(SE = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}\)

  • Single proportion: \(SE = \sqrt{\frac{p (1-p)}{n}}\)

  • Difference between two proportions: \(SE = \sqrt{\frac{p_1 (1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}}\)
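
Each formula translates directly into R; the numbers below are made up purely for illustration:

sqrt(5^2 / 100)                          # single mean: sigma = 5, n = 100
sqrt(5^2 / 100 + 6^2 / 120)              # two means: s1 = 5, n1 = 100, s2 = 6, n2 = 120
sqrt(0.3 * 0.7 / 200)                    # single proportion: p = 0.3, n = 200
sqrt(0.3 * 0.7 / 200 + 0.4 * 0.6 / 250)  # two proportions: p1 = 0.3, n1 = 200, p2 = 0.4, n2 = 250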

Inference methods based on CLT


If the necessary conditions are met, we can also use inference methods based on the CLT (see the sketch after this list):

  • use the CLT to calculate the SE of the sample statistic of interest (sample mean, sample proportion, difference between sample means, etc.)

  • calculate the test statistic, the number of standard errors the observed sample statistic falls from the null value
    • T for means, along with appropriate degrees of freedom
    • Z for proportions
  • use the test statistic to calculate the p-value, the probability of an observed or more extreme outcome given that the null hypothesis is true
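
A minimal sketch of these steps for a single proportion, with made-up data (520 successes in a sample of \(n = 1000\), testing \(H_0: p = 0.5\) vs. \(H_A: p > 0.5\)):

p_hat = 520 / 1000                 # observed sample proportion
p_0 = 0.5                          # null value

se = sqrt(p_0 * (1 - p_0) / 1000)  # SE computed under the null hypothesis
z = (p_hat - p_0) / se             # test statistic: (obs - null) / SE
pnorm(z, lower.tail = FALSE)       # one-sided p-value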

Z and T distributions

Z distribution

Also called the standard normal distribution: \(Z \sim N(mean = 0, sd = 1)\)


Finding probabilities under the normal curve:

pnorm(-1.96)
## [1] 0.0249979
pnorm(1.96, lower.tail = FALSE)
## [1] 0.0249979

Finding cutoff values under the normal curve:

qnorm(0.025)
## [1] -1.959964
qnorm(0.975)
## [1] 1.959964

T distribution

  • Also unimodal and symmetric, and centered at 0

  • Thicker tails than the normal distribution (to make up for additional variability introduced by using \(s\) instead of \(\sigma\) in calculation of the SE)

  • Parameter: degrees of freedom

    • df for single mean: \(df = n - 1\)

    • df for comparing two means:

\[df \approx \frac{(s_1^2/n_1+s_2^2/n_2)^2}{(s_1^2/n_1)^2/(n_1-1)+(s_2^2/n_2)^2/(n_2-1)}\] or, more conservatively, \(df = min(n_1 - 1, n_2 - 1)\)
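
A sketch of this calculation with made-up summary statistics (\(s_1 = 2\), \(n_1 = 20\), \(s_2 = 3\), \(n_2 = 25\)):

s1 = 2; n1 = 20
s2 = 3; n2 = 25

v1 = s1^2 / n1
v2 = s2^2 / n2
(v1 + v2)^2 / (v1^2 / (n1 - 1) + v2^2 / (n2 - 1))  # Welch approximation (~41.8)
min(n1 - 1, n2 - 1)                                # simpler conservative choice (19)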

T vs Z distributions
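
A minimal sketch of this comparison, overlaying the standard normal density with t densities for a few (arbitrarily chosen) degrees of freedom:

ggplot(data.frame(x = c(-4, 4)), aes(x = x)) +
  stat_function(fun = dnorm) +                                    # Z: standard normal
  stat_function(fun = dt, args = list(df = 2), color = "red") +   # t with 2 df
  stat_function(fun = dt, args = list(df = 10), color = "blue")   # t with 10 df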

T distribution (cont.)

Finding probabilities under the t curve:

pt(-1.96, df = 9)
## [1] 0.0408222
pt(1.96, df = 9, lower.tail = FALSE)
## [1] 0.0408222


Finding cutoff values under the t curve:

qt(0.025, df = 9)
## [1] -2.262157
qt(0.975, df = 9)
## [1] 2.262157

Examples

General Social Survey

  • Since 1972, the General Social Survey (GSS) has been monitoring societal change and studying the growing complexity of American society.

  • The GSS aims to gather data on contemporary American society in order to
    • monitor and explain trends and constants in attitudes, behaviors, attributes;
    • examine the structure and functioning of society in general as well as the role played by relevant subgroups;
    • compare the US to other societies to place American society in comparative perspective and develop cross-national models of human society;
    • make high-quality data easily accessible to scholars, students, policy makers, and others, with minimal cost and waiting.
  • GSS questions cover a diverse range of issues including national spending priorities, marijuana use, crime and punishment, race relations, quality of life, confidence in institutions, and sexual behavior.

Data

2010 GSS:

gss = read.csv("https://stat.duke.edu/~mc301/data/gss2010.csv")


Inference for a single mean

Hypothesis testing for a mean

One of the questions on the survey is “After an average work day, about how many hours do you have to relax or pursue activities that you enjoy?”. Do these data provide convincing evidence that Americans, on average, spend more than 3 hours per day relaxing? Note that the variable of interest in the dataset is hrsrelax.

library(dplyr)  # provides %>%, filter(), and summarise()

gss %>% 
  filter(!is.na(hrsrelax)) %>%
  summarise(mean(hrsrelax), median(hrsrelax), sd(hrsrelax), length(hrsrelax))
##   mean(hrsrelax) median(hrsrelax) sd(hrsrelax) length(hrsrelax)
## 1       3.680243                3     2.629641             1154

Exploratory Data Analysis

ggplot(data = gss, aes(x = hrsrelax)) + geom_histogram(binwidth = 1)

Hypotheses

What are the hypotheses for evaluating whether Americans, on average, spend more than 3 hours per day relaxing?

\[H_0: \mu = 3\] \[H_A: \mu > 3\]

Conditions

  1. Independence: The GSS uses a reasonably random sample, and the sample size of 1,154 is less than 10% of the US population, so we can assume that the respondents in this sample are independent of each other.

  2. Sample size / skew: The distribution of hours relaxed is right skewed; however, the sample size is large enough for the sampling distribution of the mean to be nearly normal.

Calculating the test statistic

\[\bar{x} \sim N\left(mean = \mu, SE = \frac{\sigma}{\sqrt{n}}\right)\] \[ \frac{\bar{x}-\mu_0}{s/\sqrt{n}} \sim T_{df=n-1} \]


\[T_{df} = \frac{obs - null}{SE} = \frac{\bar{x}-\mu_0}{s/\sqrt{n}}\] \[df = n - 1\]

# summary stats
hrsrelax_summ = gss %>% 
  filter(!is.na(hrsrelax)) %>%
  summarise(xbar = mean(hrsrelax), s = sd(hrsrelax), n = n())

# calculations
(se = hrsrelax_summ$s / sqrt(hrsrelax_summ$n))
## [1] 0.07740938
(t = (hrsrelax_summ$xbar - 3) / se)
## [1] 8.7876
(df = hrsrelax_summ$n - 1)
## [1] 1153

p-value

p-value = P(observed or more extreme outcome | \(H_0\) true)

pt(t, df, lower.tail = FALSE)
## [1] 2.720895e-18

Conclusion

  • Since the p-value is small, we reject \(H_0\).

  • The data provide convincing evidence that Americans, on average, spend more than 3 hours per day relaxing after work.

Would you expect a 90% confidence interval for the average number of hours Americans spend relaxing after work to include 3 hours?

Confidence interval for a mean

\[point~estimate \pm critical~value \times SE\]

t_star = qt(0.95, df)
pt_est = hrsrelax_summ$xbar
round(pt_est + c(-1,1) * t_star * se, 2)
## [1] 3.55 3.81

Interpret this interval in context of the data.

In R

# HT
t.test(gss$hrsrelax, mu = 3, alternative = "greater")
## 
##  One Sample t-test
## 
## data:  gss$hrsrelax
## t = 8.7876, df = 1153, p-value < 2.2e-16
## alternative hypothesis: true mean is greater than 3
## 95 percent confidence interval:
##  3.552813      Inf
## sample estimates:
## mean of x 
##  3.680243
# CI
t.test(gss$hrsrelax, conf.level = 0.90)$conf.int
## [1] 3.552813 3.807672
## attr(,"conf.level")
## [1] 0.9