Learning Curve
Imagine you have four classifiers with similar accuracies. Are they really similar? Plotting a learning curve might reveal a hidden side to these classifiers.
The four classifiers
These are four classifiers from a data science notebook with their respective accuracies:
- Logistic Regression: 0.663
- Random Forest: 0.658
- GaussianNB: 0.652
- kNN: 0.625
All four classifiers show similar accuracies. So, should we pick the logistic regression or the random forest classifier? Learning curves can help us decide.
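The accuracies above come from the notebook's own dataset, which isn't reproduced here. As a sketch of how such numbers are typically obtained, the snippet below cross-validates the same four scikit-learn classifiers on a synthetic stand-in dataset (the dataset and scores are illustrative, not the notebook's):

```python
# Sketch: cross-validated accuracy for the four classifiers.
# A synthetic dataset stands in for the notebook's real data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

classifiers = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(random_state=0),
    "GaussianNB": GaussianNB(),
    "kNN": KNeighborsClassifier(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f}")
```

A single mean accuracy per model is exactly the summary that hides the differences a learning curve reveals.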
The learning curve
The learning curve plots training and validation accuracies across varying training set sizes. It can reveal overfitting and whether more training data will help improve the classifier.
We can observe a couple of things from these curves:
- Logistic regression and GaussianNB strike a good balance between bias and variance.
- The random forest is clearly overfitting.
Interpreting the learning curve
As the legend indicates, the plot has two curves - one for the training fold and another for the cross-validation fold.
- The wide gap between the two curves of the random forest indicates a severe overfit: the classifier achieves almost 100% accuracy on the training data. The kNN classifier also shows a degree of overfitting.
- The flattening out of the validation-fold curve shows that adding more data won’t improve classifier performance. All four classifiers show this.
Verifying the overfit
We stated that the random forest classifier has overfitted. We can verify this by plotting a validation curve for the classifier. The pair of curves means the same as explained previously.
We have plotted a validation curve for the random forest, varying the max_depth hyperparameter, which controls the maximum tree depth. Sure enough, we can clearly see the classifier overfitting beyond a tree depth of about 4.
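Such a plot can be generated with scikit-learn's `validation_curve`. The sketch below uses a synthetic stand-in dataset, so the depth at which overfitting appears will differ from the notebook's:

```python
# Sketch: validation curve over max_depth for the random forest.
# The synthetic dataset stands in for the notebook's real data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import validation_curve

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

depths = np.arange(1, 11)
train_scores, val_scores = validation_curve(
    RandomForestClassifier(random_state=0), X, y,
    param_name="max_depth", param_range=depths, cv=5, scoring="accuracy",
)

# A growing gap between the two columns signals overfitting.
for d, tr, va in zip(depths, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"max_depth={d:2d}: train={tr:.3f}  val={va:.3f}")
```

Plotting the two mean-score arrays against `depths` (as in the learning-curve example) reproduces the pair of curves described above.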
For the complete code in context, refer to this Kaggle notebook.
You might have noticed we ignored the dummy classifier, which seems to be doing almost as well as the others. Does this reveal something?