Metrics & Overfitting
Our machine learning workflow incorporates a robust set of metrics to evaluate model performance for both regression and classification tasks. This document provides an overview of these metrics and how they are calculated and used in our system.
The _calculate_metrics method is the core function responsible for computing the various performance metrics. It handles both regression and classification tasks, as well as multi-output scenarios, and proceeds in four steps (sketched in the example after this list):
Data Preparation:
Ensures all inputs (y_true, y_pred, y_train, y_train_pred) are 2D numpy arrays.
Adjusts prediction arrays to match the shape of true values.
Target-specific Metrics:
Iterates through each target, calculating metrics based on the target type (categorical or numeric).
Overall Metrics:
Computes average metrics across all targets.
Overfitting Detection:
Calculates metrics to detect and quantify potential overfitting.
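For illustration, the sketch below mirrors that flow under some assumptions: _ensure_2d and calculate_metrics are hypothetical names rather than the actual implementation, a single accuracy or R2 score stands in for the full per-target metric sets described in the next sections, and the overfitting step is sketched separately at the end of this page.

```python
import numpy as np
from sklearn.metrics import accuracy_score, r2_score

def _ensure_2d(arr):
    """Coerce an input to a 2D array of shape (n_samples, n_targets)."""
    arr = np.asarray(arr)
    return arr.reshape(-1, 1) if arr.ndim == 1 else arr

def calculate_metrics(y_true, y_pred, target_types):
    """Hypothetical outline of the per-target and overall metric flow."""
    # Step 1 - data preparation: make both arrays 2D and align the
    # prediction shape with the true values.
    y_true, y_pred = _ensure_2d(y_true), _ensure_2d(y_pred)
    y_pred = y_pred[: y_true.shape[0], : y_true.shape[1]]

    # Step 2 - target-specific metrics, chosen by target type.
    per_target = []
    for i, kind in enumerate(target_types):
        t, p = y_true[:, i], y_pred[:, i]
        score = accuracy_score(t, p) if kind == "categorical" else r2_score(t, p)
        per_target.append(score)

    # Step 3 - overall metrics: average across all targets.
    return {"per_target": per_target, "overall": float(np.mean(per_target))}

# Two-target example: the first column is numeric, the second categorical.
y_true = np.array([[1.0, 0], [2.0, 1], [3.0, 1], [4.0, 0]])
y_pred = np.array([[1.1, 0], [1.9, 1], [2.7, 0], [4.2, 0]])
print(calculate_metrics(y_true, y_pred, ["numeric", "categorical"]))
```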
For categorical targets, the following metrics are calculated (see the example after this list):
Accuracy:
Ratio of correct predictions to total predictions.
Calculated using accuracy_score from scikit-learn.
Precision:
Ratio of true positive predictions to total positive predictions.
Calculated using precision_score with a weighted average.
Recall:
Ratio of true positive predictions to total actual positives.
Calculated using recall_score with a weighted average.
F1 Score:
Harmonic mean of precision and recall.
Calculated using f1_score with a weighted average.
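A minimal example of computing these classification metrics with scikit-learn (the labels are made up; "weighted" averages the per-class scores by class support, and zero_division=0 simply silences warnings for classes with no predictions):

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy labels for a single categorical target.
y_true = ["cat", "dog", "dog", "bird", "cat", "bird"]
y_pred = ["cat", "dog", "cat", "bird", "cat", "dog"]

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    # "weighted" averages per-class precision/recall/F1, weighted by class support.
    "precision": precision_score(y_true, y_pred, average="weighted", zero_division=0),
    "recall": recall_score(y_true, y_pred, average="weighted", zero_division=0),
    "f1": f1_score(y_true, y_pred, average="weighted", zero_division=0),
}
print(metrics)  # accuracy is 4/6 here; the other scores follow from the per-class counts
```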
For numeric targets, the following metrics are calculated (see the example after this list):
Mean Absolute Error (MAE):
Average absolute difference between predicted and actual values.
Calculated using mean_absolute_error from scikit-learn.
Mean Squared Error (MSE):
Average squared difference between predicted and actual values.
Calculated using mean_squared_error from scikit-learn.
R-squared (R2) Score:
Proportion of variance in the dependent variable predictable from the independent variable(s).
Calculated using r2_score from scikit-learn.
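A corresponding sketch for the numeric metrics, again with made-up values:

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Toy values for a single numeric target.
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

metrics = {
    "mae": mean_absolute_error(y_true, y_pred),  # (0.5 + 0.5 + 0.0 + 1.0) / 4 = 0.5
    "mse": mean_squared_error(y_true, y_pred),   # (0.25 + 0.25 + 0.0 + 1.0) / 4 = 0.375
    "r2": r2_score(y_true, y_pred),              # share of variance explained (~0.95 here)
}
print(metrics)
```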
To detect and quantify overfitting, we calculate the following (see the sketch after this list):
Performance Difference:
Difference between average train performance and average validation performance.
Performance Ratio:
Ratio of average train performance to average validation performance.
Overfitting Flag:
Set to True if the performance difference is greater than 0.15 or the performance ratio is greater than 1.3.
Overfitting Score:
Maximum of performance difference and (performance ratio - 1).
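Putting these rules together, a small sketch of the overfitting check could look like this. The function name and the higher-is-better assumption for the averaged scores are ours; the thresholds are the ones listed above.

```python
def overfitting_metrics(avg_train_score, avg_val_score,
                        diff_threshold=0.15, ratio_threshold=1.3):
    """Quantify overfitting from averaged train/validation scores.

    Assumes higher scores are better (e.g. accuracy or R2 averaged
    across all targets).
    """
    performance_difference = avg_train_score - avg_val_score
    performance_ratio = (avg_train_score / avg_val_score
                         if avg_val_score else float("inf"))
    return {
        "performance_difference": performance_difference,
        "performance_ratio": performance_ratio,
        "overfitting_flag": (performance_difference > diff_threshold
                             or performance_ratio > ratio_threshold),
        "overfitting_score": max(performance_difference, performance_ratio - 1),
    }

# Example: average train accuracy 0.95 vs. average validation accuracy 0.72
# -> difference 0.23, ratio ~1.32, so the flag is True and the score is ~0.32.
print(overfitting_metrics(0.95, 0.72))
```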