Introduction

In this practical, we will dive deeper into assessing classification methods and we will perform classification using tree-based methods.

We will use the packages pROC, rpart, rpart.plot, and randomForest. For these, you will probably need to run install.packages() before running the library() functions.

library(MASS)
library(ISLR)
library(tidyverse)

library(pROC)

library(rpart)
library(rpart.plot)
library(randomForest)

Before starting, it is always wise to specify a seed.

set.seed(45)

Confusion matrix, continued

In the data/ folder there is a cardiovascular disease dataset of 253 patients. The goal is to predict whether a patient will respond to treatment based on variables in this dataset:

  • severity of the disease (low/high)
  • age of the patient
  • gender of the patient
  • bad behaviour score (e.g. smoking/drinking)
  • prior occurrence of the cardiovascular disease (family history)
  • dose of the treatment administered: 1 (lowest), 2 (medium), or 3 (highest)

  1. Create a logistic regression model lr_mod for this data using the formula response ~ . and create a confusion matrix based on a .5 cutoff probability.

treat <- read_csv("data/cardiovascular_treatment.csv") %>% 
  mutate(severity = as.factor(severity),
         gender   = as.factor(gender),
         dose     = as.factor(dose),
         response = as.factor(response))

lr_mod <- glm(response ~ ., family = "binomial", data = treat)


prob_lr <- predict(lr_mod, type = "response")
pred_lr <- ifelse(prob_lr > .5, 1, 0)

table(true = treat$response, pred = pred_lr)
##     pred
## true  0  1
##    0 80 47
##    1 29 97

Confusion matrix metrics


  2. Calculate the accuracy, the true positive rate (sensitivity), the true negative rate (specificity), the false positive rate, the positive predictive value, and the negative predictive value. You can use the confusion matrix table on Wikipedia. What can you say about the model performance? Which metrics are most relevant if this model were to be used in the real world?

cmat_lr <- table(true = treat$response, pred = pred_lr)

TN <- cmat_lr[1, 1]
FN <- cmat_lr[2, 1]
FP <- cmat_lr[1, 2]
TP <- cmat_lr[2, 2]

tibble(
  Acc = (TP + TN) / sum(cmat_lr),
  TPR = TP / (TP + FN),
  TNR = TN / (TN + FP),
  FPR = FP / (TN + FP),
  PPV = TP / (TP + FP),
  NPV = TN / (TN + FN)
)
# Accuracy is .7, meaning that 30% of the patients are misclassified

# [TPR] If the patient will respond to treatment, there is a 77% probability 
# that the model will detect this

# [TNR] If the patient will not respond to treatment, there is a 63% prob
# that the model will detect this

# [FPR] If the patient does not respond to treatment, there is a 37% chance
# they will nevertheless be predicted to respond to the treatment

# [PPV] If the patient is predicted to respond to the treatment, there is a
# 67% chance they will actually respond to the treatment

# [NPV] If the patient is predicted to not respond to the treatment, there is
# a 73% probability that they will indeed not respond to the treatment

# The last two metrics are very relevant: if a new patient comes in you will
# only know the prediction and not the true value
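
Since we will compute these metrics again for other models below, it can help to wrap the arithmetic in a small helper. This is our own convenience function (conf_metrics is a hypothetical name, not part of the practical or any loaded package); it assumes a 2x2 table laid out as above, with true values in the rows, predictions in the columns, and the negative class first.

conf_metrics <- function(cmat) {
  TN <- cmat[1, 1]; FN <- cmat[2, 1]
  FP <- cmat[1, 2]; TP <- cmat[2, 2]
  tibble(
    Acc = (TP + TN) / sum(cmat),
    TPR = TP / (TP + FN),
    TNR = TN / (TN + FP),
    FPR = FP / (TN + FP),
    PPV = TP / (TP + FP),
    NPV = TN / (TN + FN)
  )
}

# e.g., conf_metrics(cmat_lr) reproduces the tibble above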

  3. Create an LDA model lda_mod for the same prediction problem. Compare its performance to the LR model.

lda_mod <- lda(response ~ ., data = treat)


pred_lda <- predict(lda_mod)$class

cmat_lda <- table(true = treat$response, pred = pred_lda)

TN <- cmat_lda[1, 1]
FN <- cmat_lda[2, 1]
FP <- cmat_lda[1, 2]
TP <- cmat_lda[2, 2]

# PPV
TP / (TP + FP)
## [1] 0.6736111
# NPV
TN / (TN + FN)
## [1] 0.733945
# The performance is exactly the same
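
As an extra check (not part of the original answer), we can confirm that the two models produce identical confusion matrices, which explains the identical metrics:

all(cmat_lr == cmat_lda)
# TRUE: every cell of the two tables matches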

  4. Compare the classification performance of lr_mod and lda_mod for the new patients in data/new_patients.csv.

new_patients <- read_csv("data/new_patients.csv") %>% 
  mutate(severity = as.factor(severity),
         gender   = as.factor(gender),
         dose     = as.factor(dose),
         response = as.factor(response))

pred_lda_new <- predict(lda_mod, newdata = new_patients)$class
prob_lr_new <- predict(lr_mod, newdata = new_patients, type = "response")
pred_lr_new <- ifelse(prob_lr_new > .5, 1, 0)

# lda
cmat_lda_new <- table(true = new_patients$response, pred = pred_lda_new)


# lr
cmat_lr_new <- table(true = new_patients$response, pred = pred_lr_new)

cmat_lda_new
##     pred
## true  0  1
##    0 16 11
##    1  9 14
cmat_lr_new
##     pred
## true  0  1
##    0 16 11
##    1  9 14
# again, the same performance


# let's look at PPV and NPV then
PPV <- cmat_lda_new[2, 2] / sum(cmat_lda_new[, 2])
NPV <- cmat_lda_new[1, 1] / sum(cmat_lda_new[, 1])

PPV
## [1] 0.56
NPV
## [1] 0.64
# Out-of-sample PPV and NPV are worse, as expected
# The models perform only slightly above chance level!

Brier score


  5. Calculate the out-of-sample Brier score for the lr_mod and give an interpretation of this number.


mean((prob_lr_new - (as.numeric(new_patients$response) - 1)) ^ 2)
## [1] 0.2283307
# the mean squared difference between the predicted probability and the
# observed class is .23 (lower is better)
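
The Brier score is the mean of (p_i - o_i)^2 over the observations, so lower is better. To judge whether .23 is any good, two simple baselines help (an extra check of our own, not part of the practical): always predicting .5 scores exactly .25, and always predicting the base rate scores p(1 - p).

obs <- as.numeric(new_patients$response) - 1  # observed outcomes as 0/1

mean((0.5 - obs)^2)        # uninformative .5 predictions: exactly .25
mean((mean(obs) - obs)^2)  # always predicting the base rate: p(1 - p)

# lr_mod's .23 is only a small improvement over these baselines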

ROC curve


  6. Create two LR models: lr1_mod with severity, age, and bb_score as predictors, and lr2_mod with the formula response ~ age + I(age^2) + gender + bb_score * prior_cvd * dose. Save the predicted probabilities on the training data.

lr1_mod <- glm(response ~ severity + bb_score + age, 
               family = "binomial", data = treat)
prob_lr1 <- predict(lr1_mod, type = "response")

lr2_mod <- glm(response ~ age + I(age^2) + gender + bb_score * prior_cvd * dose, 
               family = "binomial", data = treat)
prob_lr2 <- predict(lr2_mod, type = "response")

  7. Use the function roc() from the pROC package to create two ROC objects with the predicted probabilities: roc_lr1 and roc_lr2. Use the ggroc() method on these objects to create an ROC curve plot for each. Which model performs better? Why?

roc_lr1 <- roc(treat$response, prob_lr1)
## Setting levels: control = 0, case = 1
## Setting direction: controls < cases
roc_lr2 <- roc(treat$response, prob_lr2)
## Setting levels: control = 0, case = 1
## Setting direction: controls < cases
ggroc(roc_lr1) + theme_minimal() + labs(title = "LR1")

ggroc(roc_lr2) + theme_minimal() + labs(title = "LR2")

# The LR2 model performs better: at just about every cutoff value, both the
# sensitivity and the specificity are higher than that of the LR1 model.

  8. Print the roc_lr1 and roc_lr2 objects. Which AUC value is higher? How does this relate to the plots you made before? What is the minimum AUC value and what would a “perfect” AUC value be and how would it look in a plot?

roc_lr1
## 
## Call:
## roc.default(response = treat$response, predictor = prob_lr1)
## 
## Data: prob_lr1 in 127 controls (treat$response 0) < 126 cases (treat$response 1).
## Area under the curve: 0.6253
roc_lr2
## 
## Call:
## roc.default(response = treat$response, predictor = prob_lr2)
## 
## Data: prob_lr2 in 127 controls (treat$response 0) < 126 cases (treat$response 1).
## Area under the curve: 0.7405
# lr2 has a much higher AUC (area under the ROC curve). It represents the area
# under the curve we drew before. The minimum meaningful AUC value is 0.5
# (chance level; a classifier scoring below that could simply be inverted) and
# the maximum is 1. A perfect AUC would look like this in a plot:

ggplot(data.frame(x = c(1, 1, 0), y = c(0, 1, 1)), 
       aes(x = x, y = y)) +
  geom_line() +
  xlim(1, 0) +
  labs(y = "sensitivity", 
       x = "specificity", 
       title = "Perfect model") +
  theme_minimal()

# An intuitive interpretation of the AUC value:
# if we pick one person who does not respond to treatment and one who does, 
# AUC is the probability that the classifier ranks the person who 
# responds to treatment higher.
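
We can check this interpretation empirically. The sketch below is our own illustration (not part of the practical): it samples responder/non-responder pairs from the training data and counts how often the responder gets the higher predicted probability. This fraction should be close to the AUC of roc_lr2.

p_case    <- prob_lr2[treat$response == "1"]
p_control <- prob_lr2[treat$response == "0"]

# sample 100,000 responder/non-responder pairs and compare their probabilities
comparisons <- sample(p_case,    1e5, replace = TRUE) >
               sample(p_control, 1e5, replace = TRUE)
mean(comparisons)  # should be close to auc(roc_lr2), i.e., around .74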

Iris dataset

One of the most famous classification datasets is a dataset used in R.A. Fisher’s 1936 paper on linear discriminant analysis: the iris dataset. Fisher’s goal was to classify the three subspecies of iris according to the attributes of the plants: Sepal.Length, Sepal.Width, Petal.Length, and Petal.Width:

(Image of the three iris subspecies; source: Kaggle)

The paper includes a hand-drawn graph worth looking at.

We can reproduce this graph using the first linear discriminant from the lda() function:

# fit lda model, i.e. calculate model parameters
lda_iris <- lda(Species ~ ., data = iris)

# use those parameters to compute the first linear discriminant; the sign of a
# discriminant is arbitrary, and the minus flips it to match Fisher's figure
first_ld <- -c(as.matrix(iris[, -5]) %*% lda_iris$scaling[,1])

# plot
tibble(
  ld = first_ld,
  Species = iris$Species
) %>% 
  ggplot(aes(x = ld, fill = Species)) +
  geom_histogram(binwidth = .5, position = "identity", alpha = .9) +
  scale_fill_viridis_d() +
  theme_minimal() +
  labs(
    x = "Discriminant function",
    y = "Frequency", 
    main = "Fisher's linear discriminant function on Iris species"
  ) + 
  theme(legend.position = "top")


  9. Explore the iris dataset using summaries and plots.

summary(iris)
##   Sepal.Length    Sepal.Width     Petal.Length    Petal.Width   
##  Min.   :4.300   Min.   :2.000   Min.   :1.000   Min.   :0.100  
##  1st Qu.:5.100   1st Qu.:2.800   1st Qu.:1.600   1st Qu.:0.300  
##  Median :5.800   Median :3.000   Median :4.350   Median :1.300  
##  Mean   :5.843   Mean   :3.057   Mean   :3.758   Mean   :1.199  
##  3rd Qu.:6.400   3rd Qu.:3.300   3rd Qu.:5.100   3rd Qu.:1.800  
##  Max.   :7.900   Max.   :4.400   Max.   :6.900   Max.   :2.500  
##        Species  
##  setosa    :50  
##  versicolor:50  
##  virginica :50  
##                 
##                 
## 
# Some example plots you could make
iris %>% 
  ggplot(aes(x = Sepal.Length, y = Petal.Length, colour = Species)) + 
  geom_point() +
  scale_colour_viridis_d() +
  theme_minimal() +
  ggtitle("Lengths")

iris %>% 
  ggplot(aes(x = Sepal.Width, y = Petal.Width, colour = Species)) + 
  geom_point() +
  scale_colour_viridis_d() +
  theme_minimal() +
  ggtitle("Widths")

# The plots indicate quite strong separation between the classes

  10. Fit an additional LDA model, but this time with only Sepal.Length and Sepal.Width as predictors. Call this model lda_iris_sepal.

lda_iris_sepal <- lda(Species ~ Sepal.Length + Sepal.Width, data = iris)

  11. Create a confusion matrix of the lda_iris and lda_iris_sepal models. (NB: we did not split the dataset into a training and test set, so use the training data to generate the predictions.) Which model performs better in terms of accuracy?

# lda_iris
table(true = iris$Species, predicted = predict(lda_iris)$class)
##             predicted
## true         setosa versicolor virginica
##   setosa         50          0         0
##   versicolor      0         48         2
##   virginica       0          1        49
# lda_iris_sepal
table(true = iris$Species, predicted = predict(lda_iris_sepal)$class)
##             predicted
## true         setosa versicolor virginica
##   setosa         49          1         0
##   versicolor      0         36        14
##   virginica       0         15        35
# lda_iris performs better: the sum of its off-diagonal counts
# (the number of misclassifications) is lower.

Classification trees

Classification trees in R can be fit using the rpart() function.


  12. Use rpart() to create a classification tree for the Species of iris. Call this model iris_tree_mod. Plot this model using rpart.plot().

iris_tree_mod <- rpart(Species ~ ., data = iris)
rpart.plot(iris_tree_mod)


  13. How would an iris with 2.7 cm long and 1.5 cm wide petals be classified?

# As Iris versicolor, following the tree from top to bottom.
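
We can verify this with predict(). A minimal check (new_iris is our own example data; the sepal values are arbitrary placeholders, since the tree only splits on the petal measurements, but the model formula mentions all four predictors):

new_iris <- data.frame(
  Sepal.Length = 5, Sepal.Width = 3,    # arbitrary: not used by the splits
  Petal.Length = 2.7, Petal.Width = 1.5
)
predict(iris_tree_mod, newdata = new_iris, type = "class")
# predicted class: versicolor, as we read from the tree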

Because the classification tree only uses two variables, we can create another insightful plot using the splits on these variables.


  14. Create a scatterplot where you map Petal.Length to the x position and Petal.Width to the y position. Then, manually add a vertical and a horizontal line (using geom_segment) at the locations of the splits from the classification tree. Interpret this plot.

iris %>% 
  ggplot(aes(x = Petal.Length, y = Petal.Width, colour = Species)) +
  geom_point() +
  geom_segment(x = 2.5, xend = 2.5, y = -Inf, yend = Inf,
               colour = "black") +
  geom_segment(x = 2.5, xend = Inf, y = 1.75, yend = 1.75,
               colour = "black") +
  scale_colour_viridis_d() +
  theme_minimal()

# The first split perfectly separates setosa from the other two species.
# The second split still leaves 6 misclassified plants: 5 virginica fall on
# the versicolor side, and 1 versicolor falls on the virginica side.

There are several control parameters (tuning parameters) to the rpart() algorithm. You can find the available control parameters using ?rpart.control.


  15. Create a classification tree model where the splits continue until all the observations have been classified. Call this model iris_tree_full_mod. Plot this model using rpart.plot(). Do you expect this model to perform better or worse on new Irises?

iris_tree_full_mod <- rpart(Species ~ ., data = iris, 
                            control = rpart.control(minbucket = 1, cp = 0))

rpart.plot(iris_tree_full_mod)

# Answer using the bias-variance tradeoff, e.g.: we do not know for sure, but
# the second model probably has too much variance to perform well on new samples.
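
To see the other side of this tradeoff, we can compare the in-sample accuracy of the two trees (an extra check of our own, not part of the practical): the full tree fits the training data perfectly, which is exactly what makes it risky on new data.

mean(predict(iris_tree_mod, type = "class") == iris$Species)
# the default tree still misclassifies a few training plants

mean(predict(iris_tree_full_mod, type = "class") == iris$Species)
# should be 1: every training observation is classified correctly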

Final assignment: Random forest for classification


  16. Use the function randomForest() to create a random forest model on the iris dataset. Use the function importance() on this model and create a bar plot of variable importance. Does this agree with your expectations? How well does the random forest model perform compared to the lda_iris model?

rf_mod <- randomForest(Species ~ ., data = iris)

var_imp <- importance(rf_mod)
tibble(
  importance = c(var_imp), 
  variable = rownames(var_imp)
) %>% 
  ggplot(aes(x = variable, y = importance, fill = variable)) +
  geom_bar(stat = "identity") +
  scale_fill_viridis_d() +
  theme_minimal() +
  labs(
    x = "Variable", 
    y = "Mean reduction in Gini coefficient", 
    title = "Variable importance"
  )

# This agrees with our expectations: the petal measurements were also the most
# informative variables in the other methods we used (e.g., the classification
# tree splits only on them).

rf_mod
## 
## Call:
##  randomForest(formula = Species ~ ., data = iris) 
##                Type of random forest: classification
##                      Number of trees: 500
## No. of variables tried at each split: 2
## 
##         OOB estimate of  error rate: 4%
## Confusion matrix:
##            setosa versicolor virginica class.error
## setosa         50          0         0        0.00
## versicolor      0         47         3        0.06
## virginica       0          3        47        0.06
table(iris$Species, predict(lda_iris)$class)
##             
##              setosa versicolor virginica
##   setosa         50          0         0
##   versicolor      0         48         2
##   virginica       0          1        49
# The lda model actually performs slightly better here. Note, however, that
# its confusion matrix is within-sample, while the random forest one is based
# on out-of-bag predictions, which estimate out-of-sample error. For a fair
# comparison you would need to cross-validate the lda() model, for example.
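
A minimal sketch of such a comparison, using 5-fold cross-validation for the lda model (our own illustration, not part of the practical; fold assignment is random, so the result varies a little with the seed):

# 5-fold cross-validated accuracy for lda on the iris data
k     <- 5
folds <- sample(rep(1:k, length.out = nrow(iris)))

acc <- numeric(k)
for (i in 1:k) {
  fit    <- lda(Species ~ ., data = iris[folds != i, ])
  pred   <- predict(fit, newdata = iris[folds == i, ])$class
  acc[i] <- mean(pred == iris$Species[folds == i])
}

mean(acc)  # compare with rf_mod's OOB accuracy of 96% (4% error rate)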

Hand-in

When you have finished the practical,

  • enclose all files of the project 07_classification_evaluation.Rproj (i.e. all .R and/or .Rmd files including the one with your answers, and the .Rproj file) in a zip file, and

  • hand in the zip by PR from your fork here. Do so before Lecture 9. That way we can iron out issues during the next Q&A in Week 8.