Evaluation Metrics

Our machine learning workflow incorporates a robust set of metrics to evaluate model performance for both regression and classification tasks. This document provides an overview of these metrics and how they are calculated and used in our system.

Overview

The _calculate_metrics method is the core function responsible for computing various performance metrics. It handles both regression and classification tasks, as well as multi-output scenarios.

Metrics Calculation Process

  1. Data Preparation:

    • Ensures all inputs (y_true, y_pred, y_train, y_train_pred) are 2D numpy arrays.

    • Adjusts prediction arrays to match the shape of true values.

  2. Target-specific Metrics:

    • Iterates through each target, calculating metrics based on the target type (categorical or numeric).

  3. Overall Metrics:

    • Computes average metrics across all targets.

  4. Overfitting Detection:

    • Calculates metrics to detect and quantify potential overfitting.
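
The four steps above can be outlined roughly as follows. This is a minimal sketch rather than the actual _calculate_metrics implementation: the function name, the target_types argument, and the per-target helpers (classification_metrics and regression_metrics, sketched in the next two sections) are illustrative assumptions.

```python
import numpy as np

def calculate_metrics_sketch(y_true, y_pred, y_train, y_train_pred, target_types):
    """Hypothetical outline of the metric pipeline described above.

    target_types is assumed to be a list such as ["numeric", "categorical"],
    one entry per output column.
    """
    # 1. Data preparation: force every array to 2D so single-output and
    #    multi-output targets go through the same per-column loop.
    def to_2d(a):
        a = np.asarray(a)
        return a.reshape(-1, 1) if a.ndim == 1 else a

    y_true, y_pred = to_2d(y_true), to_2d(y_pred)
    y_train, y_train_pred = to_2d(y_train), to_2d(y_train_pred)

    # 2. Target-specific metrics: one dictionary of scores per output column,
    #    chosen by target type.
    per_target = []
    for i, kind in enumerate(target_types):
        if kind == "categorical":
            per_target.append(classification_metrics(y_true[:, i], y_pred[:, i]))
        else:
            per_target.append(regression_metrics(y_true[:, i], y_pred[:, i]))

    # 3. Overall metrics: average each metric name across the targets that report it.
    names = {name for scores in per_target for name in scores}
    overall = {
        name: float(np.mean([scores[name] for scores in per_target if name in scores]))
        for name in names
    }

    # 4. Overfitting detection: the same metrics would be computed on
    #    (y_train, y_train_pred) and compared with the validation scores
    #    (see the Overfitting Detection section below).
    return {"per_target": per_target, "overall": overall}
```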

Classification Metrics

For categorical targets, the following metrics are calculated:

  1. Accuracy:

    • Ratio of correct predictions to total predictions.

    • Calculated using accuracy_score from scikit-learn.

  2. Precision:

    • Ratio of true positive predictions to total positive predictions.

    • Calculated using precision_score with weighted average.

  3. Recall:

    • Ratio of true positive predictions to total actual positives.

    • Calculated using recall_score with weighted average.

  4. F1 Score:

    • Harmonic mean of precision and recall.

    • Calculated using f1_score with weighted average.
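
As a hedged illustration, a per-target helper built on the scikit-learn functions named above might look like this; classification_metrics is an assumed name, and the weighted averaging matches the description in the list.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def classification_metrics(y_true, y_pred):
    """Compute the four classification scores for one categorical target."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        # Weighted averaging aggregates per-class scores by class frequency,
        # so multi-class targets are handled as well.
        "precision": precision_score(y_true, y_pred, average="weighted"),
        "recall": recall_score(y_true, y_pred, average="weighted"),
        "f1": f1_score(y_true, y_pred, average="weighted"),
    }

# Example: one misclassified sample out of four.
print(classification_metrics(["cat", "dog", "dog", "cat"], ["cat", "dog", "cat", "cat"]))
```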

Regression Metrics

For numeric targets, the following metrics are calculated:

  1. Mean Absolute Error (MAE):

    • Average absolute difference between predicted and actual values.

    • Calculated using mean_absolute_error from scikit-learn.

  2. Mean Squared Error (MSE):

    • Average squared difference between predicted and actual values.

    • Calculated using mean_squared_error from scikit-learn.

  3. R-squared (R2) Score:

    • Proportion of variance in the dependent variable predictable from the independent variable(s).

    • Calculated using r2_score from scikit-learn.
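
A matching sketch for numeric targets, again using the scikit-learn functions named above (regression_metrics is an assumed name):

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def regression_metrics(y_true, y_pred):
    """Compute MAE, MSE, and R2 for one numeric target."""
    return {
        "mae": mean_absolute_error(y_true, y_pred),
        "mse": mean_squared_error(y_true, y_pred),
        "r2": r2_score(y_true, y_pred),
    }

print(regression_metrics([3.0, 2.5, 4.0, 5.1], [2.8, 2.6, 3.9, 5.4]))
```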

Overfitting Detection

To detect and quantify overfitting, we calculate:

  1. Performance Difference:

    • Difference between average train performance and average validation performance.

  2. Performance Ratio:

    • Ratio of average train performance to average validation performance.

  3. Overfitting Flag:

    • Set to True if performance difference > 0.15 or performance ratio > 1.3.

  4. Overfitting Score:

    • Maximum of performance difference and (performance ratio - 1).
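
These rules translate directly into code. The sketch below assumes higher-is-better performance scores (for example accuracy or R2) and uses the 0.15 and 1.3 thresholds quoted above; the function name and the zero-division guard are illustrative choices:

```python
def overfitting_report(train_perf, val_perf):
    """Quantify overfitting from average train vs. validation performance.

    Assumes higher-is-better scores; the 0.15 and 1.3 thresholds follow the
    rules described above.
    """
    difference = train_perf - val_perf
    ratio = train_perf / val_perf if val_perf != 0 else float("inf")
    return {
        "performance_difference": difference,
        "performance_ratio": ratio,
        "overfitting": difference > 0.15 or ratio > 1.3,
        "overfitting_score": max(difference, ratio - 1),
    }

# Example: train R2 of 0.95 vs. validation R2 of 0.70 is flagged
# (difference 0.25 > 0.15 and ratio ~1.36 > 1.3).
print(overfitting_report(0.95, 0.70))
```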

Usage in the Workflow

The metrics are calculated after model training and prediction, once predictions are available for both the training and validation data.
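
As an illustration only, an end-to-end call might look like the following, reusing the calculate_metrics_sketch outline from the Overview section on synthetic regression data; the actual workflow's API may differ:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic data: a linear signal with a small amount of noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Fit, predict, then hand everything to the metric sketch from the Overview.
model = LinearRegression().fit(X_train, y_train)
metrics = calculate_metrics_sketch(
    y_true=y_val,
    y_pred=model.predict(X_val),
    y_train=y_train,
    y_train_pred=model.predict(X_train),
    target_types=["numeric"],
)
print(metrics["overall"])
```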
