Lab 07 - Modelling course evaluations, Pt 2

Multiple predictors

2018-03-01

Due: 2018-03-08 at noon

Introduction

In this lab we revisit the professor evaluations data we modeled in the previous lab. Last time we modeled evaluation scores using a single predictor at a time; this time we will use multiple predictors to model evaluation scores.

If you don’t remember the data, review the previous lab’s introduction before continuing to the exercises.

Getting started

Packages

In this lab we will work with the tidyverse and broom packages. We can install and load them with the following:

install.packages("tidyverse")
library(tidyverse)
library(broom)

When you install the tidyverse package, a long list of packages is installed with it. However, when you load it (with the library function) only a few of them are loaded, e.g. dplyr, ggplot2, and forcats. The broom package is installed along with the tidyverse, but it is not loaded automatically, so we need to load it separately in order to make use of it.

Housekeeping

Git configuration / password caching

Configure your Git user name and email. If you do not remember how, refer to the instructions in an earlier lab. Also remember that you can cache your password for a limited amount of time.
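
If it helps, here is a minimal sketch of this configuration, run from the terminal; the name, email, and one-hour cache timeout are placeholders to replace with your own:

git config --global user.name "Your Name"
git config --global user.email "your.email@example.com"
# cache your password in memory for an hour (3600 seconds)
git config --global credential.helper 'cache --timeout=3600'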

Project name:

Update the name of your project to match the lab’s title.

Warm up

Pick one team member to complete the steps in this section while the others contribute to the discussion but do not actually touch the files on their own computers.

Before we introduce the data, let’s warm up with some simple exercises.

YAML:

Open the R Markdown (Rmd) file in your project, change the author name to your team name, and knit the document.
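
For reference, after the change the YAML header at the top of your Rmd file might look something like this (the title and output format shown here are illustrative):

---
title: "Lab 07 - Modelling course evaluations, Pt 2"
author: "Team name"
output: html_document
---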

Committing and pushing changes:

Stage the file you changed, commit it with an informative commit message, and push your changes so that your teammates can pull them in the next step.
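
You can do all of this from the Git pane in RStudio. If you prefer the terminal, the equivalent commands are roughly the following (the file name and commit message are examples only):

git add lab-07.Rmd
git commit -m "Update author name in YAML"
git push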

Pulling changes:

Now the remaining team members, who have not been making these changes in their own projects, should click on the Pull button in their Git pane and observe that the changes are now reflected in their projects as well.

The data

In this lab you will first download the data, then upload it to the data/ folder in your RStudio Cloud project.

Exercises

  1. Load the data by including the appropriate code in your R Markdown file.
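
For instance, if the file you uploaded is called evals.csv (the file name is an assumption; use whatever your file is actually called), the code might look like this:

# read the evaluations data from the data/ folder of the project
evals <- read_csv("data/evals.csv")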

Part 1: Simple linear regression

  1. Fit a linear model (one you have fit before): m_bty, predicting average professor evaluation score based on average beauty rating (bty_avg) only. Write the linear model, and note the \(R^2\) and the adjusted \(R^2\).
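
A minimal sketch of this fit, assuming the data frame is called evals as above:

m_bty <- lm(score ~ bty_avg, data = evals)
glance(m_bty)  # the r.squared and adj.r.squared columns give R^2 and adjusted R^2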

Part 2: Multiple linear regression

  1. Fit a linear model (one you have fit before): m_bty_gen, predicting average professor evaluation score based on average beauty rating (bty_avg) and gender. Write the linear model, and note the \(R^2\) and the adjusted \(R^2\). (A code sketch for this model appears after this list.)

  2. Interpret the slope and intercept of m_bty_gen in context of the data.

  3. What percent of the variability in score is explained by the model m_bty_gen?

  4. What is the equation of the line corresponding to just male professors?

  5. For two professors who received the same beauty rating, which gender tends to have the higher course evaluation score?

  6. How does the relationship between beauty and evaluation score vary between male and female professors?

  7. How do the adjusted \(R^2\) values of m_bty_gen and m_bty compare? What does this tell us about how useful gender is in explaining the variability in evaluation scores when we already have information on the professor’s beauty score?

  8. Compare the slopes of bty_avg under the two models (m_bty and m_bty_gen). Has the addition of gender to the model changed the parameter estimate (slope) for bty_avg?

  9. Create a new model called m_bty_rank with gender removed and rank added in. Write the equation of the linear model and interpret the slopes and intercept in context of the data. (A code sketch for this model also appears after this list.)
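
Minimal sketches of the two fits in this part, again assuming the data frame is called evals:

m_bty_gen <- lm(score ~ bty_avg + gender, data = evals)
tidy(m_bty_gen)    # intercept and slope estimates
glance(m_bty_gen)  # R^2 and adjusted R^2

m_bty_rank <- lm(score ~ bty_avg + rank, data = evals)
tidy(m_bty_rank)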

Part 3: The search for the best model

Going forward, only consider the following variables as potential predictors: rank, ethnicity, gender, language, age, cls_perc_eval, cls_did_eval, cls_students, cls_level, cls_profs, cls_credits, bty_avg.

  1. Which variable, on its own, would you expect to be the worst predictor of evaluation scores? Why? Hint: Think about which variable you would expect not to have any association with the professor’s score.

  2. Check your suspicions from the previous exercise. Include the model output for that variable in your response.

  3. Suppose you wanted to fit a full model with the variables listed above. If you are already going to include cls_perc_eval and cls_students, which variable should you not include as an additional predictor? Why?

  4. Fit a full model with all predictors listed above, except for the one you decided to exclude in the previous question. (A sketch of one possible full model appears after this list.)

  5. Using backward selection with adjusted \(R^2\) as the selection criterion, determine the best model. You do not need to show all steps in your answer, just the output for the final model. Also, write out the linear model for predicting score based on the final model you settle on. (A sketch of one selection step appears after this list.)

  6. Interpret the slopes of one numerical and one categorical predictor based on your final model.

  7. Based on your final model, describe the characteristics of a professor and course at the University of Texas at Austin that would be associated with a high evaluation score.

  8. Would you be comfortable generalizing your conclusions to apply to professors generally (at any university)? Why or why not?
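
For exercises 4 and 5 above, here are hedged sketches. The full-model formula assumes, purely for illustration, that cls_did_eval is the variable you decided to exclude; adjust it to match your own answer.

# one possible full model -- swap in your own set of predictors
m_full <- lm(score ~ rank + ethnicity + gender + language + age +
               cls_perc_eval + cls_students + cls_level + cls_profs +
               cls_credits + bty_avg, data = evals)

# one step of backward selection: drop each predictor in turn, refit, and
# compare the adjusted R^2 to that of the current model
glance(m_full)$adj.r.squared
glance(update(m_full, . ~ . - cls_profs))$adj.r.squared  # cls_profs is just an example

Keep whichever single drop raises the adjusted \(R^2\) the most, then repeat from the smaller model until no drop improves it; that final model is the one to report.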