
Understanding the ROC curve and AUC-ROC with Python example


The AUC (Area Under the Curve)-ROC (Receiver Operating Characteristic) curve helps us visualize how well our machine learning classifier is performing. Although it works only for binary classification problems, we will see towards the end how we can extend it to evaluate multi-class classification problems too. We'll also cover sensitivity and specificity, since these are the key concepts behind the AUC-ROC curve.

What are Sensitivity and Specificity?

This is what a confusion matrix looks like:

                     Predicted Positive      Predicted Negative
Actual Positive      True Positive (TP)      False Negative (FN)
Actual Negative      False Positive (FP)     True Negative (TN)

From the confusion matrix, we can derive some important metrics that were not discussed in the previous article. Let’s talk about them here.

Sensitivity / True Positive Rate / Recall

Sensitivity = TP / (TP + FN)

Sensitivity tells us what proportion of the positive class got correctly classified. A simple example would be to determine what proportion of the actual sick people were correctly detected by the model.

False Negative Rate

FNR = FN / (FN + TP)

False Negative Rate (FNR) tells us what proportion of the positive class got incorrectly classified by the classifier. A higher TPR and a lower FNR are desirable since we want to correctly classify the positive class.

Specificity / True Negative Rate

Specificity = TN / (TN + FP)

Specificity tells us what proportion of the negative class got correctly classified. Taking the same example as in Sensitivity, Specificity would mean determining the proportion of healthy people who were correctly identified by the model.

False Positive Rate

FPR = FP / (FP + TN)

FPR tells us what proportion of the negative class got incorrectly classified by the classifier. A higher TNR and a lower FPR are desirable since we want to correctly classify the negative class.
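
To make these four definitions concrete, here is a minimal sketch (with invented labels and predictions) that derives all of them from scikit-learn's confusion_matrix:

import numpy as np
from sklearn.metrics import confusion_matrix

# Invented ground-truth labels and predictions (1 = positive class)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])

# For binary labels {0, 1}, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)   # True Positive Rate / Recall
specificity = tn / (tn + fp)   # True Negative Rate
fnr = fn / (fn + tp)           # False Negative Rate = 1 - Sensitivity
fpr = fp / (fp + tn)           # False Positive Rate = 1 - Specificity

print(sensitivity, specificity, fnr, fpr)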

Out of these metrics, Sensitivity and Specificity are perhaps the most important and we will see later on how these are used to build an evaluation metric. But before that, let’s understand why the probability of prediction is better than predicting the target class directly.

Probability of Predictions

A machine learning classification model can be used to predict the actual class of the data point directly or predict its probability of belonging to different classes. The latter gives us more control over the result. We can determine our own threshold to interpret the result of the classifier. This is sometimes more prudent than just building a completely new model!

Setting different thresholds for classifying the positive class will inevitably change the Sensitivity and Specificity of the model. And one of these thresholds will probably give a better result than the others, depending on whether we are aiming to lower the number of False Negatives or False Positives.

Have a look at the table below:

AUC-ROC curve example

The metrics change with the changing threshold values. We could generate a confusion matrix for each threshold and compare the metrics we discussed in the previous section, but that would be a tedious process. Instead, we can plot some of these metrics against each other so that we can easily visualize which threshold gives us a better result. The AUC-ROC curve solves just that problem!
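
Before moving on, here is a small sketch (with invented labels and probabilities) that shows this effect directly: applying different cut-offs to the same predicted probabilities changes the True Positive Rate and False Positive Rate.

import numpy as np
from sklearn.metrics import confusion_matrix

# Invented ground truth and predicted probabilities of the positive class
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_prob = np.array([0.95, 0.80, 0.55, 0.40, 0.60, 0.35, 0.20, 0.10])

# Recompute TPR and FPR for a few candidate thresholds
for threshold in [0.3, 0.5, 0.7]:
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"threshold={threshold:.1f}  TPR={tp / (tp + fn):.2f}  FPR={fp / (fp + tn):.2f}")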

What is the AUC-ROC curve?

The Receiver Operating Characteristic (ROC) curve is an evaluation metric for binary classification problems. It plots the TPR against the FPR at various threshold values, essentially showing how well the classifier separates the 'signal' from the 'noise'. The Area Under the Curve (AUC) measures the ability of a classifier to distinguish between classes and is used as a summary of the ROC curve. The higher the AUC, the better the model is at distinguishing between the positive and negative classes.

AUC ROC curve

When AUC = 1, the classifier can perfectly distinguish between all the Positive and Negative class points. If, however, the AUC were 0, the classifier would be predicting all Negatives as Positives and all Positives as Negatives.

AUC ROC curve

When 0.5 < AUC < 1, there is a good chance that the classifier will be able to distinguish the positive class values from the negative class values, because it detects more True Positives and True Negatives than False Negatives and False Positives.

AUC ROC random output

When AUC = 0.5, the classifier cannot distinguish between Positive and Negative class points at all, meaning it is effectively predicting either a random class or a constant class for every data point. So, the higher the AUC value of a classifier, the better its ability to distinguish between positive and negative classes.

The AUC of a classifier is equal to the probability that the classifier will rank a randomly chosen positive example higher than a randomly chosen negative example. A classifier with a high AUC can occasionally score worse in a specific region of the curve than another classifier with a lower AUC, but in practice the AUC works well as a general measure of predictive quality.
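
This ranking interpretation is easy to verify numerically. The sketch below (again with invented labels and scores) estimates the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one, counting ties as one half, and compares it with roc_auc_score; the two values agree.

import numpy as np
from sklearn.metrics import roc_auc_score

# Invented labels and classifier scores for illustration
y_true = np.array([1, 1, 1, 0, 0, 0, 0])
scores = np.array([0.9, 0.7, 0.4, 0.8, 0.3, 0.2, 0.1])

pos = scores[y_true == 1]
neg = scores[y_true == 0]

# Fraction of (positive, negative) pairs ranked correctly, ties counted as 0.5
rank_prob = np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg])

print(rank_prob, roc_auc_score(y_true, scores))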

Why use ROC Curves?

One advantage of ROC curves is that they help us find a classification threshold that suits our specific problem. For example, if we were evaluating an email spam classifier, we would want the false positive rate to be very low: we wouldn't want someone to lose an important email to the spam filter just because our algorithm was too aggressive. We would probably even allow a fair amount of actual spam emails (false negatives) through the filter just to make sure that no important emails were lost. On the other hand, if our classifier is predicting whether someone has a terminal illness, we might be fine with a higher number of false positives (incorrectly diagnosing the illness), just to make sure that we don't miss anyone who actually has the illness (false negatives). Additionally, ROC curves and AUC scores allow us to compare the performance of different classifiers on the same problem.

How Does the AUC-ROC Curve Work?

In a ROC curve, a higher X-axis value (FPR) indicates a larger proportion of False Positives relative to True Negatives, while a higher Y-axis value (TPR) indicates a larger proportion of True Positives relative to False Negatives. So the choice of threshold depends on how we want to balance False Positives against False Negatives. Let's dig a bit deeper and understand what our ROC curve would look like for different threshold values and how the Specificity and Sensitivity would vary.

AUC-ROC curve

We can try and understand this graph by generating a confusion matrix for each point corresponding to a threshold and talking about the performance of our classifier:

Sample Confusion matrix

Point A is where the Sensitivity is the highest and Specificity the lowest. This means all the Positive class points are classified correctly and all the Negative class points are classified incorrectly. In fact, any point on the blue line corresponds to a situation where the True Positive Rate is equal to the False Positive Rate.

All points above this line correspond to the situation where the proportion of correctly classified points belonging to the Positive class is greater than the proportion of incorrectly classified points belonging to the Negative class.

Sample Confusion matrix

Although Point B has the same Sensitivity as Point A, it has a higher Specificity, meaning the number of incorrectly classified Negative class points is lower than at the previous threshold. This indicates that this threshold is better than the previous one.

Confusion Matrix AUC ROC

Between points C and D, the Sensitivity at point C is higher than at point D for the same Specificity. This means that for the same number of incorrectly classified Negative class points, the classifier predicted a higher number of Positive class points. Therefore, the threshold at point C is better than point D.

Now, depending on how many incorrectly classified points we want to tolerate for our classifier, we would choose between points B or C for predicting whether you can defeat me in PUBG or not.

“False hopes are more dangerous than fears.” – J.R.R. Tolkien

Confusion Matrix

Point E is where the Specificity becomes highest. Meaning there are no False Positives classified by the model. The model can correctly classify all the Negative class points! We would choose this point if our problem was to give perfect song recommendations to our users.

Going by this logic, can you guess where the point corresponding to a perfect classifier would lie on the graph?

Yes! It would be on the top-left corner of the ROC graph corresponding to the coordinate (0, 1) in the cartesian plane. It is here that both, the Sensitivity and Specificity, would be the highest and the classifier would correctly classify all the Positive and Negative class points.
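
One practical use of this picture is threshold selection: a common heuristic is to pick the threshold that maximizes Youden's J statistic (TPR - FPR), i.e. the ROC point that sits farthest above the diagonal. Here is a minimal, self-contained sketch of the idea, using sklearn's roc_curve (introduced in the next section) on a small invented set of labels and scores:

import numpy as np
from sklearn.metrics import roc_curve

# Invented labels and predicted probabilities for illustration
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_prob = np.array([0.95, 0.80, 0.55, 0.40, 0.60, 0.35, 0.20, 0.10])

fpr, tpr, thresholds = roc_curve(y_true, y_prob)

# Youden's J statistic: the threshold with the largest TPR - FPR
best_idx = np.argmax(tpr - fpr)
print("best threshold:", thresholds[best_idx],
      "TPR:", tpr[best_idx], "FPR:", fpr[best_idx])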

AUC-ROC Curve in Python

Synthetic Data

Now, either we can manually test the Sensitivity and Specificity for every threshold or let sklearn do the job for us. Let's create some synthetic data using sklearn's make_classification method:

# imports
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# generate 2 class dataset
X, y = make_classification(n_samples=1000, n_classes=2, n_features=20, n_informative=3, random_state=42)

# split into train/test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)

# logistic regression
model1 = LogisticRegression()
# knn
model2 = KNeighborsClassifier(n_neighbors=4)

# fit model
model1.fit(X_train, y_train)
model2.fit(X_train, y_train)

# predict probabilities
pred_prob1 = model1.predict_proba(X_test)
pred_prob2 = model2.predict_proba(X_test)

Sklearn has a very handy function, roc_curve(), which computes the ROC curve for your classifier in a matter of seconds! It returns the FPR, TPR, and threshold values:

from sklearn.metrics import roc_curve

# roc curve for models
fpr1, tpr1, thresh1 = roc_curve(y_test, pred_prob1[:,1], pos_label=1)
fpr2, tpr2, thresh2 = roc_curve(y_test, pred_prob2[:,1], pos_label=1)

# roc curve for tpr = fpr 
random_probs = [0 for i in range(len(y_test))]
p_fpr, p_tpr, _ = roc_curve(y_test, random_probs, pos_label=1)

The AUC score can be computed using the roc_auc_score() method of sklearn:

from sklearn.metrics import roc_auc_score

# auc scores
auc_score1 = roc_auc_score(y_test, pred_prob1[:,1])
auc_score2 = roc_auc_score(y_test, pred_prob2[:,1])

print(auc_score1, auc_score2)
0.9761029411764707 0.9233769727403157

We can also plot the ROC curves for the two algorithms using matplotlib:

# matplotlib
import matplotlib.pyplot as plt
plt.style.use('seaborn')  # on newer matplotlib versions, use 'seaborn-v0_8'

# plot roc curves
plt.plot(fpr1, tpr1, linestyle='--',color='orange', label='Logistic Regression')
plt.plot(fpr2, tpr2, linestyle='--',color='green', label='KNN')
plt.plot(p_fpr, p_tpr, linestyle='--', color='blue')
# title
plt.title('ROC curve')
# x label
plt.xlabel('False Positive Rate')
# y label
plt.ylabel('True Positive rate')

plt.legend(loc='best')
plt.savefig('ROC',dpi=300)
plt.show();
Binary class ROC curve

It is evident from the plot that the AUC for the Logistic Regression ROC curve is higher than that for the KNN ROC curve. Therefore, we can say that logistic regression did a better job of classifying the positive class in the dataset.

Real-world Data: Heart Disease Prediction

To demonstrate how the ROC curve is constructed in practice, I’m going to work with the Heart Disease UCI data set in Python. The data set has 14 attributes and 303 observations and is typically used to predict whether a patient has heart disease based on the other 13 attributes, which include age, sex, cholesterol level, and other measurements.

Imports & Loading Data

# Imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Load data
df = pd.read_csv('heart.csv')
df.head()

Train-Test Split

For this analysis, I’ll use a standard 75% — 25% train-test split.

# Split data into train and test sets
from sklearn.model_selection import train_test_split

X = df.drop('target', axis=1)
y = df.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                   random_state=56)

Logistic Regression Classifier

Before I write a function to calculate false positive and true positive rates, I’ll fit a vanilla Logistic Regression classifier on the training data, and make predictions on the test set.

# Fit a vanilla Logistic Regression classifier and make predictions
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
y_pred_test = clf.predict(X_test)

Calculating True Positive Rate and False Positive Rate

Now that I have test predictions, I can write a function to calculate the true positive rate and false-positive rate. This is a critical step, as these are the two variables needed to produce the ROC curve.

# Function to calculate True Positive Rate and False Positive Rate

def calc_TP_FP_rate(y_true, y_pred):
    
    # Convert predictions to series with index matching y_true
    y_pred = pd.Series(y_pred, index=y_true.index)
    
    # Instantiate counters
    TP = 0
    FP = 0
    TN = 0
    FN = 0

    # Determine whether each prediction is TP, FP, TN, or FN
    for i in y_true.index: 
        if y_true[i]==y_pred[i]==1:
           TP += 1
        if y_pred[i]==1 and y_true[i]!=y_pred[i]:
           FP += 1
        if y_true[i]==y_pred[i]==0:
           TN += 1
        if y_pred[i]==0 and y_true[i]!=y_pred[i]:
           FN += 1
    
    # Calculate true positive rate and false positive rate
    tpr = TP / (TP + FN)
    fpr = FP / (FP + TN)

    return tpr, fpr

# Test function

calc_TP_FP_rate(y_test, y_pred_test)
(0.6923076923076923, 0.1891891891891892)

The test shows that the function appears to be working — a true positive rate of 69% and a false positive rate of 19% are perfectly reasonable results.

Exploring varying thresholds

To obtain the ROC curve, I need more than one pair of true positive/false positive rates. I need to vary the threshold probability that the Logistic Regression classifier uses to predict whether a patient has heart disease (target=1) or doesn’t (target=0). Remember, while Logistic Regression is used to assign a class label, what it’s actually doing is determining the probability that an observation belongs to a specific class. In a typical binary classification problem, an observation must have a probability of > 0.5 to be assigned to the positive class. However, in this case, I will vary that threshold probability value incrementally from 0 to 1. This will result in the ranges of true positive rates and false-positive rates that allow me to build the ROC curve.

In the code blocks below, I obtain these true positive rates and false-positive rates across a range of threshold probability values. For comparison, I use logistic regression with (1) no regularization and (2) L2 regularization.

# LOGISTIC REGRESSION (NO REGULARIZATION)

# Fit and predict test class probabilities
lr = LogisticRegression(max_iter=1000, penalty='none')  # on newer scikit-learn versions, use penalty=None
lr.fit(X_train, y_train)
y_test_probs = lr.predict_proba(X_test)[:,1]

# Containers for true positive / false positive rates
lr_tp_rates = []
lr_fp_rates = []

# Define probability thresholds to use, between 0 and 1
probability_thresholds = np.linspace(0,1,num=100)

# Find true positive / false positive rate for each threshold
for p in probability_thresholds:
    
    y_test_preds = []
    
    for prob in y_test_probs:
        if prob > p:
            y_test_preds.append(1)
        else:
            y_test_preds.append(0)
            
    tp_rate, fp_rate = calc_TP_FP_rate(y_test, y_test_preds)
        
    lr_tp_rates.append(tp_rate)
    lr_fp_rates.append(fp_rate)

# LOGISTIC REGRESSION (L2 REGULARIZATION)

# Fit and predict test class probabilities
lr_l2 = LogisticRegression(max_iter=1000, penalty='l2')
lr_l2.fit(X_train, y_train)
y_test_probs = lr_l2.predict_proba(X_test)[:,1]

# Containers for true positive / false positive rates
l2_tp_rates = []
l2_fp_rates = []

# Define probability thresholds to use, between 0 and 1
probability_thresholds = np.linspace(0,1,num=100)

# Find true positive / false positive rate for each threshold
for p in probability_thresholds:
    
    y_test_preds = []
    
    for prob in y_test_probs:
        if prob > p:
            y_test_preds.append(1)
        else:
            y_test_preds.append(0)
            
    tp_rate, fp_rate = calc_TP_FP_rate(y_test, y_test_preds)
        
    l2_tp_rates.append(tp_rate)
    l2_fp_rates.append(fp_rate)

Plotting the ROC Curves

# Plot ROC curves

fig, ax = plt.subplots(figsize=(6,6))
ax.plot(lr_fp_rates, lr_tp_rates, label='Logistic Regression')
ax.plot(l2_fp_rates, l2_tp_rates, label='L2 Logistic Regression')
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate')
ax.legend();

Both versions of the logistic regression classifier seem to do a pretty good job, but the L2 regularized version appears to perform slightly better.

Calculating AUC scores

sklearn has an auc() function, which I'll make use of here to calculate the AUC scores for both versions of the classifier. auc() takes the false positive rates (x-values) and true positive rates (y-values) we previously calculated and returns the area under the curve computed with the trapezoidal rule.

# Get AUC scores

from sklearn.metrics import auc

print(f'Logistic Regression (No reg.) AUC {auc(lr_fp_rates, lr_tp_rates)}')
print(f'Logistic Regression (L2 reg.) AUC {auc(l2_fp_rates, l2_tp_rates)}')

As expected, the classifiers both have similar AUC scores, with the L2 regularized version performing slightly better.
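
As a quick sanity check, the exact AUC values can also be computed directly from the predicted probabilities with roc_auc_score (used earlier for the synthetic data); small differences from the numbers above are expected because the hand-rolled curves only use 100 evenly spaced thresholds.

from sklearn.metrics import roc_auc_score

# Exact AUC from the predicted class probabilities of each model
print(f'Logistic Regression (No reg.) AUC {roc_auc_score(y_test, lr.predict_proba(X_test)[:,1])}')
print(f'Logistic Regression (L2 reg.) AUC {roc_auc_score(y_test, lr_l2.predict_proba(X_test)[:,1])}')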

The easy way for ROC curves and AUC

Now that we’ve had fun plotting these ROC curves from scratch, you’ll be relieved to know that there is a much, much easier way. sklearn’s plot_roc_curve() function can efficiently plot ROC curves using only a fitted classifier and test data as input. These plots conveniently include the AUC score as well.

# Use sklearn to plot ROC curves

from sklearn.metrics import plot_roc_curve

plot_roc_curve(lr, X_test, y_test, name = 'Logistic Regression')
plot_roc_curve(lr_l2, X_test, y_test, name = 'L2 Logistic Regression');
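
One caveat: plot_roc_curve was deprecated in scikit-learn 1.0 and removed in 1.2. On recent versions, the equivalent call is RocCurveDisplay.from_estimator, which likewise takes a fitted classifier and test data and includes the AUC score in the legend:

# On newer scikit-learn versions, use RocCurveDisplay instead of plot_roc_curve
from sklearn.metrics import RocCurveDisplay

RocCurveDisplay.from_estimator(lr, X_test, y_test, name='Logistic Regression')
RocCurveDisplay.from_estimator(lr_l2, X_test, y_test, name='L2 Logistic Regression');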

AUC-ROC for Multi-Class Classification

As mentioned earlier, the AUC-ROC curve is only defined for binary classification problems, but we can extend it to multi-class classification by using the One vs. All (one-vs-rest) technique. So, if we have three classes 0, 1, and 2, the ROC for class 0 is generated by classifying 0 against not-0 (i.e., classes 1 and 2), the ROC for class 1 by classifying 1 against not-1, and so on. The ROC curves for a multi-class classification model can be obtained as follows:

# multi-class classification
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score

# generate 3 class dataset
X, y = make_classification(n_samples=1000, n_classes=3, n_features=20, n_informative=3, random_state=42)

# split into train/test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)

# fit model
clf = OneVsRestClassifier(LogisticRegression())
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
pred_prob = clf.predict_proba(X_test)

# roc curve for classes
fpr = {}
tpr = {}
thresh ={}

n_class = 3

for i in range(n_class):    
    fpr[i], tpr[i], thresh[i] = roc_curve(y_test, pred_prob[:,i], pos_label=i)
    
# plotting    
plt.plot(fpr[0], tpr[0], linestyle='--',color='orange', label='Class 0 vs Rest')
plt.plot(fpr[1], tpr[1], linestyle='--',color='green', label='Class 1 vs Rest')
plt.plot(fpr[2], tpr[2], linestyle='--',color='blue', label='Class 2 vs Rest')
plt.title('Multiclass ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive rate')
plt.legend(loc='best')
plt.savefig('Multiclass ROC',dpi=300);    
Multiclass ROC
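
If a single summary number is needed for the multi-class model, roc_auc_score (already imported above) can average the one-vs-rest AUCs directly from the full probability matrix:

# One-vs-rest, macro-averaged AUC across the three classes
macro_auc = roc_auc_score(y_test, pred_prob, multi_class='ovr', average='macro')
print(macro_auc)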

End Notes

I hope you found this article useful in understanding how powerful the AUC-ROC curve is for measuring the performance of a classifier. You'll use it a lot in industry and even in data science or machine learning hackathons.

Resources:

https://towardsdatascience.com/understanding-the-roc-curve-and-auc-dd4f9a192ecb

http://madrury.github.io/jekyll/update/statistics/2017/06/21/auc-proof.html

https://www.dataschool.io/roc-curves-and-auc-explained/

Amir Masoud Sefidian
Machine Learning Engineer