Due: 2018-03-08 at noon
In this lab we revisit the professor evaluations data we modeled in the previous lab. There, we modeled evaluation scores using a single predictor at a time; this time, we use multiple predictors together to model evaluation scores.
If you don’t remember the data, review the previous lab’s introduction before continuing to the exercises.
Go to the course organization on GitHub: https://github.com/Sta199-S18.
Find the repo starting with lab-07 and ending with your team name (this should be the only lab-07 repo available to you).
In the repo, click on the green Clone or download button, select Use HTTPS (this might already be selected by default, and if it is, you’ll see the text Clone with HTTPS as in the image below). Click on the clipboard icon to copy the repo URL.
Go to RStudio Cloud and into the course workspace. Create a New Project from Git Repo. You will need to click on the down arrow next to the New Project button to see this option.
Copy and paste the URL of your assignment repo into the dialog box:
Hit OK, and you’re good to go!
In this lab we will work with the tidyverse and broom packages. We can install and load them with the following:
install.packages("tidyverse")
library(tidyverse)
library(broom)
When you install the tidyverse package, a long list of packages is installed with it. However, when you load it (with the library function), only a few of them are loaded, e.g. dplyr, ggplot2, and forcats. The broom package is installed with the tidyverse, but we need to load it separately in order to make use of it.
Configure your Git user name and email. If you cannot remember the instructions, refer to an earlier lab. Also remember that you can cache your password for a limited amount of time.
Update the name of your project to match the lab’s title.
Pick one team member to complete the steps in this section while the others contribute to the discussion but do not actually touch the files on their computer.
Before we introduce the data, let’s warm up with some simple exercises.
Open the R Markdown (Rmd) file in your project, change the author name to your team name, and knit the document.
Now, the remaining team members who have not been concurrently making these changes on their projects should click on the Pull button in their Git pane and observe that the changes are now reflected on their projects as well.
In this lab you will first download the data, then upload it to the data/
folder in your RStudio Cloud project.
Download the data: evals-mod.csv. Then, read the data from the evals-mod.csv file.
Fit a linear model (one you have fit before): m_bty, predicting average professor evaluation score based on average beauty rating (bty_avg) only. Write the linear model, and note the \(R^2\) and the adjusted \(R^2\).
Fit a linear model: m_bty_gen, predicting average professor evaluation score based on average beauty rating (bty_avg) and gender. Write the linear model, and note the \(R^2\) and the adjusted \(R^2\).
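A minimal sketch of these fits, assuming the data file was uploaded to data/evals-mod.csv and contains variables named score, bty_avg, and gender:
# Read in the evaluations data (read_csv is loaded with the tidyverse)
evals <- read_csv("data/evals-mod.csv")
# Beauty rating only
m_bty <- lm(score ~ bty_avg, data = evals)
glance(m_bty) %>% select(r.squared, adj.r.squared)
# Beauty rating and gender
m_bty_gen <- lm(score ~ bty_avg + gender, data = evals)
glance(m_bty_gen) %>% select(r.squared, adj.r.squared)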
Interpret the slopes and intercept of m_bty_gen in context of the data.
What percent of the variability in score is explained by the model m_bty_gen?
What is the equation of the line corresponding to just male professors?
For two professors who received the same beauty rating, which gender tends to have the higher course evaluation score?
How does the relationship between beauty and evaluation score vary between male and female professors?
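One way to explore the last two questions visually is to color the scatterplot by gender; a sketch, assuming the evals data frame from above:
# Scatterplot of evaluation score vs. beauty rating, with a separate
# regression line fit for each gender
ggplot(evals, aes(x = bty_avg, y = score, color = gender)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE)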
How do the adjusted \(R^2\) values of m_bty_gen and m_bty compare? What does this tell us about how useful gender is in explaining the variability in evaluation scores when we already have information on the beauty score of the professor?
Compare the slopes of bty_avg under the two models (m_bty and m_bty_gen). Has the addition of gender to the model changed the parameter estimate (slope) for bty_avg?
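To compare the two estimates side by side, you could pull the bty_avg row out of each model's tidy() output:
# Coefficient (slope) of bty_avg in each model
tidy(m_bty) %>% filter(term == "bty_avg")
tidy(m_bty_gen) %>% filter(term == "bty_avg")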
Create a new model called m_bty_rank with gender removed and rank added in. Write the equation of the linear model and interpret the slopes and intercept in context of the data.
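A sketch of this fit, following the same pattern as the earlier models:
# Beauty rating and rank as predictors
m_bty_rank <- lm(score ~ bty_avg + rank, data = evals)
tidy(m_bty_rank)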
Going forward, only consider the following variables as potential predictors: rank, ethnicity, gender, language, age, cls_perc_eval, cls_did_eval, cls_students, cls_level, cls_profs, cls_credits, bty_avg.
Which variable, on its own, would you expect to be the worst predictor of evaluation scores? Why? Hint: Think about which variable you would expect to have no association with the professor's score.
Check your suspicions from the previous exercise. Include the model output for that variable in your response.
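For example, a single-predictor model can be fit and summarized as below; cls_level is only a placeholder, so substitute the variable you picked:
# Replace cls_level with the predictor you suspect is weakest
m_single <- lm(score ~ cls_level, data = evals)
tidy(m_single)
glance(m_single)$adj.r.squared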
Suppose you wanted to fit a full model with the variables listed above. If you are already going to include cls_perc_eval and cls_students, which variable should you not include as an additional predictor? Why?
Fit a full model with all predictors listed above, except for the one you decided to exclude in the previous question.
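A sketch of the full fit; the formula below lists every candidate predictor, so remove the one you decided to exclude:
# Full model: drop the predictor you excluded in the previous exercise
m_full <- lm(score ~ rank + ethnicity + gender + language + age +
               cls_perc_eval + cls_did_eval + cls_students + cls_level +
               cls_profs + cls_credits + bty_avg,
             data = evals)
glance(m_full)$adj.r.squared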
Using backward-selection with adjusted R-squared as the selection criterion, determine the best model. You do not need to show all steps in your answer, just the output for the final model. Also, write out the linear model for predicting score based on the final model you settle on.
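Each step of backward selection drops one predictor from the current model and keeps the drop only if the adjusted \(R^2\) goes up; update() makes the bookkeeping easy. Dropping cls_profs here is only an illustration:
# Refit the full model without one predictor and compare adjusted R-squared
m_step <- update(m_full, . ~ . - cls_profs)
glance(m_step)$adj.r.squared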
Interpret the slopes of one numerical and one categorical predictor based on your final model.
Based on your final model, describe the characteristics of a professor and course at the University of Texas at Austin that would be associated with a high evaluation score.
Would you be comfortable generalizing your conclusions to apply to professors generally (at any university)? Why or why not?