
# 【ML】Evaluation Metrics

Published on 2024/07/23

This post covers the basic evaluation metrics for machine learning models. Other evaluation metrics exist as well, of course.

Here is an overview of each metric, along with its formula:

### 1. Accuracy

・Definition
The ratio of correctly predicted observations to the total observations. It is suitable for balanced datasets but can be misleading for imbalanced datasets.
・Formula
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
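As a quick sketch, the formula above can be computed directly from confusion-matrix counts (the counts below are made up for illustration):

```python
# Made-up confusion-matrix counts for illustration
TP, TN, FP, FN = 40, 45, 5, 10

# Accuracy = (TP + TN) / all observations
accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)  # 0.85
```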

### 2. Precision

・Definition
The ratio of correctly predicted positive observations to the total predicted positives. It is a measure of a classifier’s exactness.
・Formula
\text{Precision} = \frac{TP}{TP + FP}

### 3. Recall (Sensitivity)

・Definition
The ratio of correctly predicted positive observations to all observations in the actual class. It is a measure of a classifier’s completeness.
・Formula
\text{Recall} = \frac{TP}{TP + FN}
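Likewise, with made-up counts, recall is the fraction of actual positives the model found:

```python
# Made-up counts: 40 true positives, 10 false negatives
TP, FN = 40, 10

# Recall = TP / (TP + FN)
recall = TP / (TP + FN)
print(recall)  # 0.8
```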

### 4. F1 Score

・Definition
The harmonic mean of Precision and Recall. The F1 Score is more useful than accuracy, especially when you have an uneven class distribution.
・Formula
\text{F1 Score} = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
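A minimal sketch of the formula, using made-up precision and recall values:

```python
# Made-up precision and recall values for illustration
precision, recall = 0.8, 0.5

# F1 = harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.6154
```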

### 5. ROC-AUC

・Definition
A performance measurement for classification problems at various threshold settings. AUC measures the entire two-dimensional area underneath the ROC curve.
・Formula
There is no single formula for ROC-AUC as it involves plotting the True Positive Rate (TPR) against the False Positive Rate (FPR) at various threshold settings and calculating the area under this curve.
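Although there is no closed-form formula, the AUC equals the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one (ties counting one half). A minimal sketch with made-up labels and scores:

```python
def roc_auc(y_true, y_score):
    # AUC = P(score of random positive > score of random negative),
    # with ties counting 0.5 (the Mann-Whitney U formulation)
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up labels and predicted scores
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```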

### 6. Mean Squared Error (MSE)

・Definition
Used for regression tasks, it measures the average of the squares of the errors—that is, the average squared difference between the estimated values and the actual value.
・Formula
\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2

where \hat{y}_i is the predicted value.
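The formula translates directly into code; the values below are made up for illustration:

```python
# Made-up actual and predicted values
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

# MSE = mean of squared errors
mse = sum((y - p) ** 2 for y, p in zip(y_true, y_pred)) / len(y_true)
print(mse)  # 0.375
```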

### 7. Mean Absolute Error (MAE)

・Definition
Also used for regression, it measures the average magnitude of the errors in a set of predictions, without considering their direction.
・Formula
\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|
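The same made-up regression values can illustrate MAE; note that it penalizes large errors less harshly than MSE:

```python
# Made-up actual and predicted values
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

# MAE = mean of absolute errors
mae = sum(abs(y - p) for y, p in zip(y_true, y_pred)) / len(y_true)
print(mae)  # 0.5
```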

### 8. R-squared (R²)

・Definition
Indicates the proportion of the variance in the dependent variable that is predictable from the independent variables.
・Formula
R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}
Where \bar{y} is the mean of the actual values.
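A minimal sketch of the formula, again with made-up values; R² compares the model's squared error against that of always predicting the mean:

```python
# Made-up actual and predicted values
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

mean_y = sum(y_true) / len(y_true)
ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # residual sum of squares
ss_tot = sum((y - mean_y) ** 2 for y in y_true)             # total sum of squares
r2 = 1 - ss_res / ss_tot
print(round(r2, 4))  # 0.9486
```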

These metrics cover both classification and regression tasks and help in evaluating the performance of machine learning models effectively.

#### Option

How to make a custom loss in TensorFlow: a custom loss function takes two arguments (the true labels and the predictions) and returns the loss value.

```python
import tensorflow as tf

# Custom loss function: mean squared error between labels and predictions
def custom_loss(y_true, y_pred):
    loss = tf.reduce_mean(tf.square(y_true - y_pred))
    return loss

# Compiling a model with the custom loss function
# (assumes `model` is an already-built tf.keras model)
model.compile(optimizer="adam", loss=custom_loss)
```