Seven common machine learning evaluation metrics
Evaluation metrics are essential for assessing the performance of machine learning models. Seven of the most common are accuracy, precision, recall, F1 score, ROC-AUC, mean squared error, and R-squared. Accuracy measures the percentage of correctly classified instances; precision is the ratio of correctly predicted positive observations to all predicted positives; recall is the ratio of correctly predicted positive observations to all actual positives; the F1 score is the harmonic mean of precision and recall; ROC-AUC is the area under the receiver operating characteristic curve; mean squared error is the average of the squared prediction errors; and R-squared is the proportion of the variance in the dependent variable that is predictable from the independent variables. Each metric is walked through below with a short code example.
1. Accuracy: Accuracy is the most basic evaluation metric, representing the ratio of correctly predicted instances to the total number of instances in the dataset.
from sklearn.metrics import accuracy_score

# Toy labels: 4 of the 5 predictions match, so accuracy = 0.8
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
accuracy = accuracy_score(y_true, y_pred)
print("Accuracy:", accuracy)
2. Precision: Precision measures the correctness of positive predictions, calculated as the ratio of true positive predictions to the sum of true positive and false positive predictions.
from sklearn.metrics import precision_score

# Of the 2 predicted positives, both are correct (TP=2, FP=0), so precision = 1.0
precision = precision_score(y_true, y_pred)
print("Precision:", precision)
3. Recall: Recall evaluates the model's ability to capture positive instances, calculated as the ratio of true positive predictions to the sum of true positive and false negative predictions.
from sklearn.metrics import recall_score

# Of the 3 actual positives, 2 are predicted correctly (TP=2, FN=1), so recall = 2/3
recall = recall_score(y_true, y_pred)
print("Recall:", recall)
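4. F1 Score: The F1 score is the harmonic mean of precision and recall, balancing the two in a single number. A minimal sketch using scikit-learn's f1_score, reusing y_true and y_pred from the examples above:

from sklearn.metrics import f1_score

# Harmonic mean of precision (1.0) and recall (2/3):
# 2 * 1.0 * (2/3) / (1.0 + 2/3) = 0.8
f1 = f1_score(y_true, y_pred)
print("F1 score:", f1)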
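5. ROC-AUC: ROC-AUC calculates the area under the receiver operating characteristic curve, which plots the true positive rate against the false positive rate across classification thresholds. Unlike the metrics above, it requires predicted scores or probabilities rather than hard labels; the y_scores values in this sketch are hypothetical, invented purely for illustration:

from sklearn.metrics import roc_auc_score

# Hypothetical predicted probabilities for the positive class
y_scores = [0.1, 0.9, 0.4, 0.2, 0.8]
roc_auc = roc_auc_score(y_true, y_scores)
print("ROC-AUC:", roc_auc)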
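6. Mean Squared Error: Mean squared error measures the average of the squares of the errors between predicted and actual values, so it applies to regression rather than classification. A sketch using scikit-learn's mean_squared_error, with made-up regression targets:

from sklearn.metrics import mean_squared_error

# Hypothetical continuous targets and predictions;
# errors are 0.5, -0.5, 0, -1, so MSE = (0.25 + 0.25 + 0 + 1) / 4 = 0.375
y_true_reg = [3.0, -0.5, 2.0, 7.0]
y_pred_reg = [2.5, 0.0, 2.0, 8.0]
mse = mean_squared_error(y_true_reg, y_pred_reg)
print("MSE:", mse)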
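7. R-squared: R-squared represents the proportion of the variance in the dependent variable that is predictable from the independent variables, with 1.0 indicating a perfect fit. A sketch using scikit-learn's r2_score, reusing the made-up regression values from the example above:

from sklearn.metrics import r2_score

# Proportion of variance explained; reuses y_true_reg and y_pred_reg from above
r2 = r2_score(y_true_reg, y_pred_reg)
print("R-squared:", r2)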