25. Evaluation API
This page documents the evaluation API. For workflows, see Evaluation how-to.
25.1 What it is for
The evaluation brick provides metric implementations and helper functions for scoring predicted labels or score matrices against ground-truth labels. [1]
25.2 Examples
List the available metrics:

```python
from modssc.evaluation import list_metrics

print(list_metrics())
```
Evaluate accuracy and macro F1:

```python
import numpy as np

from modssc.evaluation import evaluate

y_true = np.array([0, 1, 1])
y_pred = np.array([0, 1, 0])
print(evaluate(y_true, y_pred, ["accuracy", "macro_f1"]))
```
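For reference, the two metrics in the example can be reproduced with plain NumPy. The sketch below uses the standard definitions (accuracy as the mean of exact matches; macro F1 as the unweighted mean of per-class F1 scores) and is an illustration, not the library's actual implementation:

```python
import numpy as np

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores (standard macro F1 definition)."""
    f1_scores = []
    for c in np.union1d(y_true, y_pred):
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives for class c
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        denom = 2 * tp + fp + fn
        f1_scores.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(f1_scores))

y_true = np.array([0, 1, 1])
y_pred = np.array([0, 1, 0])
accuracy = float(np.mean(y_true == y_pred))
print(accuracy)                  # 2 of 3 predictions correct
print(macro_f1(y_true, y_pred))  # mean of per-class F1 for classes 0 and 1
```

With these inputs, both metrics come out to 2/3: class 0 and class 1 each have one misclassification-related error, so their per-class F1 scores coincide.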
Metrics are implemented in `src/modssc/evaluation/metrics.py`. [1]