Confusion (or Error) matrices
In machine learning, a model needs to learn from the errors it makes. One example of how this is implemented is online advertising: users are occasionally asked whether the ads they receive are relevant to their interests. Their positive or negative answers feed back into the machine learning model and allow it to correct its predictions and become more accurate. This kind of feedback is most naturally associated with supervised learning, but there are ways of feeding mistaken outputs back into unsupervised learning methods as well.
A confusion matrix (or error matrix)[1] allows the performance of a supervised learning algorithm to be visualised (in unsupervised learning, the equivalent is usually called a matching matrix).
For example, consider an algorithm that distinguishes between cats and dogs, tested on a sample of 8 cats and 5 dogs. Of the 8 cats, it correctly predicted 5 as cats but mistook the other 3 for dogs.
Each prediction is either correct or an error, and each outcome can be classified as a positive or a negative. Treating "cat" as the positive class, the 5 cats correctly identified are true positives, the 3 cats predicted as dogs are false negatives, any dogs predicted as cats would be false positives, and dogs correctly identified as dogs would be true negatives; the confusion matrix for the cat class tallies these four counts.
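To make this concrete, below is a minimal sketch in Python of how such a matrix can be tallied from per-example labels. The cat predictions match the counts given above; the dog predictions (all correct) are an assumption, since the text does not state how the dogs were classified.

```python
from collections import Counter

# Actual labels: 8 cats and 5 dogs, as in the example above.
actual = ["cat"] * 8 + ["dog"] * 5

# Predicted labels: 5 cats correct and 3 cats mistaken for dogs (from the
# text); the dog predictions below are hypothetical placeholders.
predicted = ["cat"] * 5 + ["dog"] * 3 + ["dog"] * 5

# Count each (actual, predicted) pair to build the confusion matrix.
matrix = Counter(zip(actual, predicted))

labels = ["cat", "dog"]
print("actual \\ predicted", *labels)
for a in labels:
    print(f"{a:>17}", *(matrix[(a, p)] for p in labels))
```

Rows correspond to the actual class and columns to the predicted class, so the diagonal holds the correct predictions and the off-diagonal cells hold the errors.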
A false positive is called a Type I error, and a false negative is a Type II error.
Statistical analysis can be carried out on these results, including the true positive rate (TPR, also known as sensitivity or recall), the false positive rate (FPR), precision, and specificity, to name a few.
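These measures all derive from the four cells of the matrix. The sketch below computes them using their standard definitions, reusing the cat-class counts from the example (the FP and TN values remain the hypothetical dog-side counts assumed earlier).

```python
# Cell counts for the cat class from the example above.
tp, fn = 5, 3   # cats predicted as cats / cats predicted as dogs (from the text)
fp, tn = 0, 5   # dogs predicted as cats / as dogs (hypothetical, as above)

tpr = tp / (tp + fn)          # true positive rate (sensitivity, recall)
fpr = fp / (fp + tn)          # false positive rate
precision = tp / (tp + fp)    # fraction of predicted positives that are correct
specificity = tn / (tn + fp)  # true negative rate; equals 1 - FPR

print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  "
      f"precision={precision:.2f}  specificity={specificity:.2f}")
```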