People providing an organ for donation sometimes seek the help of a special “medical consultant”. These consultants assist the patient in all aspects of the surgery, with the goal of reducing the possibility of complications during the medical procedure and recovery. Patients might choose a consultant based in part on the historical complication rate of the consultant’s clients.
One consultant tried to attract patients by noting that the average complication rate for liver donor surgeries in the US is about 10%, but her clients have only had 3 complications in the 62 liver donor surgeries she has facilitated. She claims this is strong evidence that her work meaningfully contributes to reducing complications (and therefore she should be hired!).
A parameter for a hypothesis test is the “true” value of interest. We typically estimate the parameter using a sample statistic as a point estimate.
\(p\): true rate of complication
\(\hat{p}\): rate of complication in the sample = \(\frac{3}{62}\) = 0.048
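The point estimate can be computed directly in R (wrapping an assignment in parentheses prints the value, as in the code below):
# observed complication rate for this consultant's clients
(p_hat = 3 / 62)
## [1] 0.04839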
Do these data show that the consultant's work causes a lower complication rate? No. The claim is causal, but the data are observational: patients were not randomly assigned to use a consultant. For example, maybe patients who can afford a medical consultant can also afford better medical care, which could likewise lead to a lower complication rate.
While it is not possible to assess the causal claim, it is still possible to test for an association using these data. For this question we ask, could the low complication rate of \(\hat{p}\) = 0.048 be due to chance?
\(H_0\): Complication rate for this consultant is no different from the US average of 10%
\(H_A\): Complication rate for this consultant is lower than the US average of 10%
Null hypothesis, \(H_0\): Defendant is innocent
Alternative hypothesis, \(H_A\): Defendant is guilty
Present the evidence: Collect data
Start with a null hypothesis (\(H_0\)) that represents the status quo
Set an alternative hypothesis (\(H_A\)) that represents the research question, i.e. what we’re testing for
each person in the study can be thought of as a trial
when an individual trial has only two possible outcomes, it is called a Bernoulli random variable
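As a quick illustration (not from the original notes), a single Bernoulli trial with a 10% complication probability can be simulated in R, coding 1 as a complication and 0 as no complication:
# one Bernoulli trial: returns 1 (complication) with probability 0.1
rbinom(n = 1, size = 1, prob = 0.1)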
Suppose the true complication rate is 10% and we take a random sample of 4 patients. What is the probability that exactly 1 has a complication ([C]) and the other 3 do not ([NC])? There are 4 possible scenarios:
[C] - [NC] - [NC] - [NC] = \(0.1 \times 0.9 \times 0.9 \times 0.9\) = 0.073
[NC] - [C] - [NC] - [NC] = \(0.9 \times 0.1 \times 0.9 \times 0.9\) = 0.073
[NC] - [NC] - [C] - [NC] = \(0.9 \times 0.9 \times 0.1 \times 0.9\) = 0.073
[NC] - [NC] - [NC] - [C] = \(0.9 \times 0.9 \times 0.9 \times 0.1\) = 0.073
Total: 4 \(\times\) 0.073 = 0.292
The binomial distribution describes the probability of having exactly \(k\) successes in \(n\) independent Bernoulli trials with probability of success \(p\).
P(\(k\) successes in \(n\) trials) = # of scenarios \(\times\) P(one scenario)
\[ = {n \choose k} p^k (1-p)^{n - k} \]
where
\[ {n \choose k} = \frac{n!}{k!(n-k)!} \]
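To connect the formula back to the four-scenario calculation, the case \(n = 4\), \(k = 1\), \(p = 0.1\) can be computed by hand in R (a quick check, not part of the original notes):
# 4 scenarios, each with probability 0.1 * 0.9^3
choose(4, 1) * 0.1^1 * 0.9^3
## [1] 0.2916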
# the same probability via R's binomial density function
dbinom(x = 1, size = 4, prob = 0.1)
## [1] 0.2916
# P(exactly 1), P(exactly 2), P(exactly 3), P(exactly 4) complications
dbinom(x = 1:4, size = 4, prob = 0.1)
## [1] 0.2916 0.0486 0.0036 0.0001
# P(at least 1 complication in 4 trials)
sum(dbinom(x = 1:4, size = 4, prob = 0.1))
## [1] 0.3439
# equivalently, 1 - P(no complications), using the binomial CDF
1 - pbinom(q = 0, size = 4, prob = 0.1)
## [1] 0.3439
p-value = P(observed or more extreme outcome | \(H_0\) true)
= P(3 or fewer complications | \(p = 0.10\))
# P(3 or fewer complications in 62 trials), given p = 0.1
(p_val = sum(dbinom(x = 0:3, size = 62, prob = 0.1)))
## [1] 0.121
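Equivalently, since this p-value is a left-tail probability, the binomial CDF gives the same answer (a quick check, not in the original notes):
# P(3 or fewer complications in 62 trials), via the binomial CDF
pbinom(q = 3, size = 62, prob = 0.1)
## [1] 0.121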
We often use 5% as the cutoff for whether the p-value is low enough that the data are unlikely to have come from the null model. This cutoff value is called the significance level (\(\alpha\)).
If p-value < \(\alpha\), reject \(H_0\) in favor of \(H_A\): The data provide convincing evidence for the alternative hypothesis.
If p-value > \(\alpha\), fail to reject \(H_0\): The data do not provide convincing evidence for the alternative hypothesis.
Since the p-value is greater than the significance level (0.121 > 0.05), we fail to reject the null hypothesis. These data do not provide convincing evidence that this consultant's clients have a lower complication rate than the overall US rate of 10%.
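For reference, R's built-in exact binomial test carries out the same calculation in a single call (a convenience check; the notes build the computation up from dbinom() instead):
# exact binomial test of H0: p = 0.1 vs. HA: p < 0.1
binom.test(x = 3, n = 62, p = 0.1, alternative = "less")$p.value
## [1] 0.121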
Instead of constructing the exact null distribution from the binomial distribution, we can also approximate it by simulation.
Remember that \(H_0: p = 0.10\), so we need to simulate a null distribution where the probability of success (complication) for each trial (patient) is 0.10.
set.seed(9)
library(ggplot2)
# create the sample space: red = complication, blue = no complication
chips = c("red", "blue")
# draw the first sample of size 62 from the null distribution
sim1 = sample(chips, size = 62, prob = c(0.1, 0.9), replace = TRUE)
# view the sample
table(sim1)
## sim1
## blue red
## 51 11
# calculate the simulated sample proportion of complications (red chips)
(p_hat_sim1 = sum(sim1 == "red") / length(sim1))
## [1] 0.1774
# set up a data frame to store 100 simulated sample proportions
sim_dist = data.frame(p_hat_sim = rep(NA, 100))
# record the first simulated proportion
sim_dist$p_hat_sim[1] = p_hat_sim1
ggplot(sim_dist, aes(x = p_hat_sim)) +
geom_dotplot() +
xlim(0,0.26) + ylim(0,10)
sim2 = sample(chips, size = 62,
prob = c(0.1, 0.9), replace = TRUE)
(p_hat_sim2 = sum(sim2 == "red") / length(sim2))
## [1] 0.08065
sim_dist$p_hat_sim[2] = p_hat_sim2
ggplot(sim_dist, aes(x = p_hat_sim)) +
geom_dotplot() +
xlim(0,0.26) + ylim(0,10)
sim3 = sample(chips, size = 62,
prob = c(0.1, 0.9), replace = TRUE)
(p_hat_sim3 = sum(sim3 == "red") / length(sim3))
## [1] 0.2097
sim_dist$p_hat_sim[3] = p_hat_sim3
ggplot(sim_dist, aes(x = p_hat_sim)) +
geom_dotplot() +
xlim(0,0.26) + ylim(0,10)
Application exercise 6:
Automate the process of constructing the simulated null distribution using 100 simulations. Plot the distribution using a stacked dot plot, and calculate the p-value two ways: first by counting the dots on the plot, and then using R and subsetting.
Challenge: Your code should have as few hard coded arguments as possible. The ultimate goal is to be able to re-use the code with little modification for another dataset/hypothesis test.
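One possible sketch of such a solution (the names n_sim, p_null, and p_hat_obs are our own choices, and the code reuses chips and ggplot2 from above; many other approaches work):
# reusable inputs: change these for another dataset / hypothesis test
n_sim = 100          # number of simulated samples
n = 62               # sample size
p_null = 0.1         # null value of p
p_hat_obs = 3 / n    # observed sample proportion

set.seed(9)
sim_dist = data.frame(p_hat_sim = rep(NA, n_sim))
for (i in 1:n_sim) {
  sim = sample(chips, size = n, prob = c(p_null, 1 - p_null), replace = TRUE)
  sim_dist$p_hat_sim[i] = sum(sim == "red") / n
}

# stacked dot plot of the simulated null distribution
ggplot(sim_dist, aes(x = p_hat_sim)) +
  geom_dotplot()

# simulated p-value: proportion of simulated p-hats at or below the observed one
mean(sim_dist$p_hat_sim <= p_hat_obs)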