Author: alexcphung
Performance Metrics: Precision and Recall
A great tool that should always be used when evaluating a classification model is the confusion matrix, which tabulates predicted labels against true labels.
The accuracy of the model is the total number of correct predictions divided by the total number of predictions.
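To make this concrete, here is a minimal sketch with scikit-learn; the y_true and y_pred labels are illustrative toy data, not the output of any particular model.

```python
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth class labels (hypothetical)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # model predictions (hypothetical)

# Rows are true classes, columns are predicted classes.
print(confusion_matrix(y_true, y_pred))
# [[3 1]
#  [1 3]]

# Accuracy = correct predictions / total predictions = 6 / 8
print(accuracy_score(y_true, y_pred))  # 0.75
```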
The precision of a class measures how trustworthy the model is when it predicts that a point belongs to that class: of all the points assigned to the class, the fraction that truly belong to it (TP / (TP + FP)).
The recall of a class expresses how well the model detects that class: of all the points that truly belong to the class, the fraction the model correctly identifies (TP / (TP + FN)).
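A rough sketch of these two quantities, reusing the toy labels above; the TP/FP/FN counts are read off the confusion matrix from the previous snippet.

```python
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp, fp, fn = 3, 1, 1  # counts for class 1, taken from the confusion matrix above

print(tp / (tp + fp))                    # precision of class 1: 0.75
print(precision_score(y_true, y_pred))   # 0.75 (pos_label=1 by default)

print(tp / (tp + fn))                    # recall of class 1: 0.75
print(recall_score(y_true, y_pred))      # 0.75
```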
The F1 score of a class is the harmonic mean of its precision and recall, 2 × precision × recall / (precision + recall); it combines the precision and recall of a class into a single metric.
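A quick check of the harmonic-mean formula against scikit-learn's f1_score, continuing the same toy example:

```python
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

precision, recall = 0.75, 0.75          # values from the previous snippet
f1_manual = 2 * precision * recall / (precision + recall)
print(f1_manual)                         # 0.75

print(f1_score(y_true, y_pred))          # 0.75, matches the formula
```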
For a given class, the different combinations of recall and precision have the following interpretations (see the sketch after this list):
high recall + high precision: the class is perfectly handled by the model
low recall + high precision: the model can’t detect the class well but is highly trustworthy when it does
high recall + low precision: the class is well detected but the model also includes points of other classes in it
low recall + low precision: the class is poorly handled by the model
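One convenient way to see which of these four combinations applies to each class is scikit-learn's classification_report, which prints per-class precision and recall; the labels here are the same illustrative toy data as above.

```python
from sklearn.metrics import classification_report

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# The precision and recall columns of each row tell you which of the
# four combinations above describes that class.
print(classification_report(y_true, y_pred))
```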
Another good metric is the ROC curve (short for Receiver Operating Characteristic), which is defined with respect to a given class. (I will talk about it in the next post.)