Mean squared error calculation with Python
Mean squared error (MSE) is a commonly used metric to evaluate the performance of a regression model. It measures the average squared difference between the actual and predicted values.
To calculate the mean squared error, we first compute the squared error for each data point by subtracting the predicted value from the actual value and squaring the result. We then take the average of all these squared errors to get the final MSE value.
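Written as a formula, for n data points:

MSE = (1/n) * Σ (actual_i - predicted_i)^2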
Here is the code in Python to calculate the mean squared error:
def mean_squared_error(actual, predicted):
    # Squared difference for each (actual, predicted) pair
    squared_errors = [(a - p) ** 2 for a, p in zip(actual, predicted)]
    # Average the squared errors to get the MSE
    mse = sum(squared_errors) / len(actual)
    return mse

# Example usage
actual_values = [2, 4, 6, 8, 10]
predicted_values = [1.5, 3.5, 5.5, 7.5, 9.5]
mse = mean_squared_error(actual_values, predicted_values)
print("Mean Squared Error:", mse)
In this code, we define a function mean_squared_error that takes two lists of actual and predicted values as input, calculates the squared error for each pair, and averages them to get the MSE. For the example input, every prediction is off by 0.5, so each squared error is 0.25 and the script prints Mean Squared Error: 0.25.
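In practice, you would often use a library rather than writing the loop yourself. Below is a minimal sketch of the same calculation with NumPy (assuming NumPy is installed in your environment); scikit-learn also provides this metric as sklearn.metrics.mean_squared_error.

import numpy as np

actual_values = np.array([2, 4, 6, 8, 10])
predicted_values = np.array([1.5, 3.5, 5.5, 7.5, 9.5])

# Vectorized version: element-wise squared differences, then their mean
mse = np.mean((actual_values - predicted_values) ** 2)
print("Mean Squared Error:", mse)  # prints 0.25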