AUC-ROC analysis with Python

AUC-ROC (Area Under the Receiver Operating Characteristic curve) analysis is a commonly used method for evaluating binary classification models. It measures how well the model distinguishes between the positive and negative classes across all classification thresholds.

Step 1: Import the required libraries and load the data
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt

# Load your data here

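If you do not have a dataset at hand, one way to follow along is with a small synthetic dataset. The snippet below is a minimal sketch, assuming scikit-learn's make_classification and train_test_split; replace it with your own data loading code.

# Illustrative only: generate a synthetic binary classification dataset.
# Replace this with your own data loading code.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.7, 0.3], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)
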
Step 2: Fit your model and make predictions
# Fit your model on the training data, then get predicted
# probabilities for the positive class on the test set.
# Example:
# model.fit(X_train, y_train)
# y_pred = model.predict_proba(X_test)[:, 1]

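For concreteness, here is one possible instantiation of the step above. It is a sketch that assumes a scikit-learn LogisticRegression as the model; any classifier that exposes predict_proba would work the same way.

# Illustrative choice of model: logistic regression (an assumption,
# not a requirement -- any classifier with predict_proba works).
from sklearn.linear_model import LogisticRegression

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Probability of the positive class for each test example.
y_pred = model.predict_proba(X_test)[:, 1]
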
Step 3: Calculate AUC-ROC score and plot the ROC curve
# Calculate AUC-ROC score
auc_score = roc_auc_score(y_test, y_pred)

# Plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_pred)  # rates at each decision threshold
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % auc_score)
plt.plot([0, 1], [0, 1], 'k--')  # diagonal baseline: a random classifier
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()
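
As a rough guide, an AUC of 0.5 corresponds to random guessing and 1.0 to perfect separation of the classes. If you also need a single operating threshold, the thresholds array returned by roc_curve can be used to pick one; the sketch below uses Youden's J statistic (tpr - fpr), which is one common heuristic rather than the only choice.

# Optional: pick an operating threshold by maximizing Youden's J (tpr - fpr).
# This is one common heuristic; the best choice depends on your error costs.
best_idx = np.argmax(tpr - fpr)
best_threshold = thresholds[best_idx]
print('AUC = %0.3f, suggested threshold = %0.3f' % (auc_score, best_threshold))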
