class: center, middle, inverse, title-slide

# Cross-Validation

### Yue Jiang

### STA 210 / Duke University / Spring 2023

---

### Review: Multiple regression

The linear regression model is given by

`\begin{align*} y_i &= \beta_0 + \beta_1x_{i1} + \cdots + \beta_px_{ip} + \epsilon_i,\\ \epsilon_i &\stackrel{i.i.d.}{\sim} N(0, \sigma^2) \end{align*}`

Note the variability at the individual level. When using this model, we think about relationships between the predictors and the .vocab[conditional expectation] of the response:

`\begin{align*} E(y | x_1, \cdots, x_p) &= \beta_0 + \beta_1x_1 + \cdots + \beta_px_p \end{align*}`

(familiar visualization on next page...)

---

### Conditional expectation of the response

<!-- -->

---

### Comparing models?

.question[
Suppose you had two models (that aimed to do the same thing). In what sorts of ways might you compare the models? How might one model be "better" (whatever this means) than another?
]

---

### A model from last class...

`\begin{align*} \widehat{\log_{10}(price)}_i = 2.34 + 1.78~carat_i - 0.44(carat_i)^2 \end{align*}`

.question[
We know that this model has a *horrible* interpretation in terms of clearly explaining how carat and price might relate to each other. However, it *did* do pretty well in terms of fitting the data (and thus in terms of prediction).

How might we quantify how well our model is doing in terms of prediction?
]

---

### Loss functions

Remember that we are predicting each of our outcome variables `\(y_i\)` with a *linear* prediction from some model, `\(\hat{\beta}_0 + \hat{\beta}_1x_i\)`.

But how do we tell whether we've made a "good" prediction or not? What do we count as good?

Keep in mind the notion of the **residual**:

`\begin{align*} \hat{\epsilon}_i &= y_i - \left(\hat{\beta}_0 + \hat{\beta}_1x_i\right) \\ &= y_i - \hat{y}_i \end{align*}`

---

### Some loss functions

What might be some candidate loss functions?
`\begin{align*} f(y_1, \cdots, y_n, \hat{y}_1, \cdots, \hat{y}_n) &= \frac{1}{n} \sum_{i = 1}^n \mathbf{1}_{\hat{y}_i \neq 17} &?\\ f(y_1, \cdots, y_n, \hat{y}_1, \cdots, \hat{y}_n) &= \frac{1}{n} \sum_{i = 1}^n \mathbf{1}_{y_i \neq \hat{y}_i} &?\\ f(y_1, \cdots, y_n, \hat{y}_1, \cdots, \hat{y}_n) &= \frac{1}{n} \sum_{i = 1}^n \Big | y_i - \hat{y}_i \Big | &?\\ f(y_1, \cdots, y_n, \hat{y}_1, \cdots, \hat{y}_n) &= \frac{1}{n} \sum_{i = 1}^n \left(y_i - \hat{y}_i\right)^2 &?\\ f(y_1, \cdots, y_n, \hat{y}_1, \cdots, \hat{y}_n) &= \frac{1}{n} \sum_{i = 1}^n \log \left( \frac{1 + \exp(2(y_i - \hat{y}_i))}{2\exp(y_i - \hat{y}_i)} \right) &? \end{align*}`

---

### The Mean Squared Error

The .vocab[mean squared error] (MSE) of our predictions is given by

`\begin{align*} MSE = \frac{1}{n}\sum_{i = 1}^n (y_i - \widehat{y}_i)^2 \end{align*}`

.question[
In plain English, what is the mean squared error? Why does this type of loss function make sense? Should we always choose the model that minimizes the MSE?
]

---

### What about this model?

<img src="cross-validation_files/figure-html/unnamed-chunk-3-1.png" width="90%" style="display: block; margin: auto;" />

---

### What about this model?

<img src="cross-validation_files/figure-html/unnamed-chunk-4-1.png" width="90%" style="display: block; margin: auto;" />

---

### Comparing the two models

.question[
Which model has the lower MSE? Which would you rather use for interpretation? Which would you rather use for prediction? Why?
]

---

### What about this model?

<img src="cross-validation_files/figure-html/unnamed-chunk-5-1.png" width="90%" style="display: block; margin: auto;" />

---

### What went wrong?

---

### Cross-validation

.vocab[Cross-validation] is a concept that revolves around how well our model performs *outside* of the data that it was fit on. It's generally used in **prediction** settings to assess how well our model might work on "previously unseen" data.
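Before turning to specific splitting schemes, here is a minimal sketch of the core idea on simulated data: fit on one subset, then judge the fit on observations the model never saw. (All names and numbers below are illustrative, not from these slides.)

```r
set.seed(1)
n <- 200
x <- runif(n, 0, 3)
y <- 1 + 2 * x + rnorm(n, sd = 0.5)   # simulated "truth" with noise SD 0.5
dat <- data.frame(x = x, y = y)

train <- dat[1:150, ]                 # fit the model here...
test  <- dat[151:200, ]               # ...but evaluate it here

fit  <- lm(y ~ x, data = train)
pred <- predict(fit, newdata = test)
sqrt(mean((test$y - pred)^2))         # out-of-sample RMSE, near the noise SD
```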
In cross-validation techniques, a model is first fit on a .vocab[training] set, and then is evaluated on an "independent" .vocab[testing] set that was never used to fit any of the models.

The goal of these types of methods is often to avoid .vocab[overfitting], which might occur, for instance, if we add too many variables to the model.

---

### Cross-validation

.pull-left[
.vocab[Exhaustive] splits

- Leave `\(p\)` out cross-validation: find all possible ways to create *testing* sets of size `\(p\)`
- Leave one out cross-validation: same as above, but with a testing set of size `\(p = 1\)`
]

.pull-right[
.vocab[Non-exhaustive] splits

- Simple holdout cross-validation: only create one testing split
- Monte Carlo cross-validation: same as above, but done multiple times with random testing sets each time
- `\(k\)`-fold cross-validation: randomly split the dataset into `\(k\)` mutually exclusive but collectively exhaustive subsets; use each of the `\(k\)` subsets in turn as a test set
]

.centering[**See board**]

---

### Cross-validation

.question[
What are some of the advantages and disadvantages of each of these methods?
]

You might see a bunch of lingo out there, but all of these methods are similar in spirit: they use independent testing sets to estimate predictive performance on unseen data while guarding against overfitting.

---

### Some candidate models

```r
m1 <- lm(log(price) ~ color + clarity + carat, data = diamonds)
m2 <- lm(log(price) ~ color + clarity + carat + cut, data = diamonds)
m1_aug <- augment(m1)
m2_aug <- augment(m2)
```

---

### Some candidate models

```r
rmse(m1_aug, truth = `log(price)`, estimate = .fitted)
```

```
## # A tibble: 1 x 3
##   .metric .estimator .estimate
##   <chr>   <chr>          <dbl>
## 1 rmse    standard       0.319
```

```r
rmse(m2_aug, truth = `log(price)`, estimate = .fitted)
```

```
## # A tibble: 1 x 3
##   .metric .estimator .estimate
##   <chr>   <chr>          <dbl>
## 1 rmse    standard       0.319
```

---

### Simple holdout method

```r
set.seed(123)
dim(diamonds)
```

```
## [1] 2000   10
```

```r
indices <- sample(1:2000, size = 2000 * 0.8, replace = FALSE)
train.data <- diamonds %>%
  slice(indices)
test.data <- diamonds %>%
  slice(-indices)
dim(train.data)
```

```
## [1] 1600   10
```

```r
dim(test.data)
```

```
## [1] 400  10
```

---

### Simple holdout method

```r
m1 <- lm(log(price) ~ color + clarity + carat, data = train.data)
m2 <- lm(log(price) ~ color + clarity + carat + cut, data = train.data)
m1_pred <- predict(m1, test.data)
m2_pred <- predict(m2, test.data)
```

---

### Simple holdout method

```r
head(m1_pred)
```

```
##         1         2         3         4         5         6 
##  6.945904  7.315938 10.278735  8.451798  7.609596  8.781945
```

```r
head(log(test.data$price))
```

```
## [1] 6.569481 7.419381 9.765489 8.620832 7.649693 8.892062
```

```r
head(log(test.data$price) - m1_pred)
```

```
##           1           2           3           4           5           6 
## -0.37642278  0.10344306 -0.51324546  0.16903378  0.04009678  0.11011630
```

---

### Simple holdout method

```r
sqrt(mean((log(test.data$price) - m1_pred)^2))
```

```
## [1] 0.3188945
```

```r
sqrt(mean((log(test.data$price) - m2_pred)^2))
```

```
## [1] 0.3191523
```

---

### Leave one out CV

```r
library(caret)
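# LOOCV refits the model n times, holding out a single observation as
# the test set each time; with the n = 2000 rows here, that means 2000
# separate model fits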
cv_method <- trainControl(method = "LOOCV")
m1 <- train(log(price) ~ color + clarity + carat,
            data = diamonds, method = "lm", trControl = cv_method)
m2 <- train(log(price) ~ color + clarity + carat + cut,
            data = diamonds, method = "lm", trControl = cv_method)
```

---

### Leave one out CV

```r
print(m1)
```

```
## Linear Regression 
## 
## 2000 samples
##    3 predictor
## 
## No pre-processing
## Resampling: Leave-One-Out Cross-Validation 
## Summary of sample sizes: 1999, 1999, 1999, 1999, 1999, 1999, ... 
## Resampling results:
## 
##   RMSE       Rsquared  MAE      
##   0.3218101  0.899073  0.2599859
## 
## Tuning parameter 'intercept' was held constant at a value of TRUE
```

---

### Leave one out CV

```r
print(m2)
```

```
## Linear Regression 
## 
## 2000 samples
##    4 predictor
## 
## No pre-processing
## Resampling: Leave-One-Out Cross-Validation 
## Summary of sample sizes: 1999, 1999, 1999, 1999, 1999, 1999, ... 
## Resampling results:
## 
##   RMSE       Rsquared   MAE      
##   0.3220167  0.8989444  0.2605107
## 
## Tuning parameter 'intercept' was held constant at a value of TRUE
```

---

### Leave `\(p\)` out CV

...we're not going to do this because of computational costs. For instance, for `\(p = 2\)`, there are almost 2 million ways to create unique test sets; for `\(p = 3\)`, there are over 1.3 billion ways to do so (given our 2,000 observations).

---

### K-fold CV

```r
library(caret)
set.seed(123)
cv_method <- trainControl(method = "cv", number = 10)
m1 <- train(log(price) ~ color + clarity + carat,
            data = diamonds, method = "lm", trControl = cv_method)
m2 <- train(log(price) ~ color + clarity + carat + cut,
            data = diamonds, method = "lm", trControl = cv_method)
```

---

### K-fold CV

```r
print(m1)
```

```
## Linear Regression 
## 
## 2000 samples
##    3 predictor
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 1800, 1800, 1800, 1800, 1800, 1800, ... 
## Resampling results:
## 
##   RMSE      Rsquared   MAE      
##   0.321648  0.9010039  0.2599949
## 
## Tuning parameter 'intercept' was held constant at a value of TRUE
```

---

### K-fold CV

```r
print(m2)
```

```
## Linear Regression 
## 
## 2000 samples
##    4 predictor
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 1800, 1800, 1800, 1800, 1800, 1800, ... 
## Resampling results:
## 
##   RMSE       Rsquared   MAE      
##   0.3210479  0.9008995  0.2601354
## 
## Tuning parameter 'intercept' was held constant at a value of TRUE
```

---

### Repeated K-fold CV

```r
library(caret)
set.seed(123)
cv_method <- trainControl(method = "repeatedcv", number = 10, repeats = 5)
m1 <- train(log(price) ~ color + clarity + carat,
            data = diamonds, method = "lm", trControl = cv_method)
m2 <- train(log(price) ~ color + clarity + carat + cut,
            data = diamonds, method = "lm", trControl = cv_method)
```

(Note that `method = "repeatedcv"` is needed here; with `method = "cv"`, `caret` silently ignores the `repeats` argument.)

---

### Repeated K-fold CV

```r
print(m1)
```

```
## Linear Regression 
## 
## 2000 samples
##    3 predictor
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 5 times) 
## Summary of sample sizes: 1800, 1800, 1800, 1800, 1800, 1800, ... 
## Resampling results:
## 
##   RMSE      Rsquared   MAE      
##   0.321648  0.9010039  0.2599949
## 
## Tuning parameter 'intercept' was held constant at a value of TRUE
```

---

### Repeated K-fold CV

```r
print(m2)
```

```
## Linear Regression 
## 
## 2000 samples
##    4 predictor
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 5 times) 
## Summary of sample sizes: 1800, 1800, 1800, 1800, 1800, 1800, ... 
## Resampling results:
## 
##   RMSE       Rsquared   MAE      
##   0.3210479  0.9008995  0.2601354
## 
## Tuning parameter 'intercept' was held constant at a value of TRUE
```

---

### Comparing our models

```r
table(diamonds$cut)
```

```
## 
##      Fair      Good Very Good   Premium     Ideal 
##        61       199       452       505       783
```

.question[
How many more model parameters have to be estimated? Given what we've seen, which of these two models might you use? In what settings?
]
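---

### K-fold CV by hand

The `\(k\)`-fold procedure that `caret` automates can also be written out directly, which makes the mechanics explicit. A rough sketch (illustrative code; for self-containment it uses the full `diamonds` data from ggplot2 rather than the 2,000-row sample on these slides):

```r
library(ggplot2)   # for the diamonds data

set.seed(123)
k <- 10
n <- nrow(diamonds)
fold <- sample(rep(1:k, length.out = n))    # random fold assignment, 1..k

rmse_by_fold <- sapply(1:k, function(j) {
  train <- diamonds[fold != j, ]            # fit on the other k - 1 folds
  test  <- diamonds[fold == j, ]            # evaluate on the held-out fold
  fit   <- lm(log(price) ~ color + clarity + carat, data = train)
  pred  <- predict(fit, newdata = test)
  sqrt(mean((log(test$price) - pred)^2))    # out-of-sample RMSE for fold j
})

mean(rmse_by_fold)                          # cross-validated RMSE
```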