What is a confusion matrix?
A confusion matrix is a table used in machine learning and statistical classification to evaluate the performance of a model. It compares the model's predicted classes against the actual classes, and for a binary classifier each cell counts one of four outcomes:
- True Positive (TP): The model correctly predicted a positive classification (e.g., correctly identified a patient as having a disease).
- False Positive (FP): The model incorrectly predicted a positive classification (e.g., falsely identified a healthy patient as having a disease).
- True Negative (TN): The model correctly predicted a negative classification (e.g., correctly identified a healthy patient as not having a disease).
- False Negative (FN): The model incorrectly predicted a negative classification (e.g., falsely identified a patient with a disease as healthy).
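As a concrete illustration, the sketch below builds a 2×2 confusion matrix for a hypothetical disease-screening example using scikit-learn; the label lists and variable names are invented for illustration, not taken from any particular dataset.

```python
from sklearn.metrics import confusion_matrix

# Hypothetical labels: 1 = has disease, 0 = healthy
y_true = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # actual classes
y_pred = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]  # model predictions

# With labels=[0, 1], scikit-learn lays the matrix out as:
# [[TN, FP],
#  [FN, TP]]
cm = confusion_matrix(y_true, y_pred, labels=[0, 1])
tn, fp, fn, tp = cm.ravel()

print(cm)
print(f"TP={tp}, FP={fp}, TN={tn}, FN={fn}")
```

Reading the printed matrix row by row gives the counts of healthy patients correctly and incorrectly flagged (top row) and diseased patients missed and caught (bottom row).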
The confusion matrix thus summarizes the model's performance, and metrics such as accuracy, precision, recall, and F1 score can be computed directly from its four cells. These metrics help identify a model's strengths and weaknesses and guide improvements.
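A minimal sketch of how those metrics follow from the four counts, using hypothetical numbers consistent with the example above:

```python
# Hypothetical counts from a 2x2 confusion matrix
tp, fp, tn, fn = 3, 1, 5, 1

accuracy = (tp + tn) / (tp + tn + fp + fn)          # fraction of all predictions that are correct
precision = tp / (tp + fp)                           # of the predicted positives, how many were right
recall = tp / (tp + fn)                              # of the actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of precision and recall

print(f"accuracy={accuracy:.2f}, precision={precision:.2f}, "
      f"recall={recall:.2f}, f1={f1:.2f}")
```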