Calculate Model Metrics

📘 Python SDK

Dataset.calculate_evaluation_metrics(
    model: Model,
    options: dict  # {"class_agnostic": bool, "allowed_label_matches": dict}
) -> AsyncJob

This endpoint updates matches and metrics for a Model on a given Dataset (and its ground truth Annotations). Running it is required to sort by IoU, view false positives/false negatives, and view model insights.
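As a minimal sketch, kicking off the calculation from the Python SDK might look like the following. The API key, dataset and model IDs, and the client helpers (NucleusClient, get_dataset, get_model, AsyncJob.sleep_until_complete) are assumptions for illustration and may differ in your SDK version.

import nucleus

# Hypothetical API key and IDs, for illustration only.
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
dataset = client.get_dataset("ds_sample_dataset_id")
model = client.get_model("prj_sample_model_id")

# Kick off asynchronous matching and metrics calculation.
job = dataset.calculate_evaluation_metrics(model)

# Optionally block until matching and metrics are ready.
job.sleep_until_complete()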

You can add predictions from a model to a Dataset after the metrics have been calculated. However, the metrics calculation must be re-triggered for the new predictions to be matched with ground truth and appear as false positives/negatives, and for their effect on metrics to be reflected in model run insights.
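A sketch of that re-trigger is shown below. The upload_predictions call and the new_predictions list are assumptions about how new predictions reach the Dataset in your SDK version; only the second call to calculate_evaluation_metrics is the step this section describes.

# Assuming new_predictions is a list of prediction objects (e.g. box predictions)
# constructed elsewhere; the upload method name may vary by SDK version.
dataset.upload_predictions(model, new_predictions)

# Re-trigger matching and metrics so the new predictions are matched against
# ground truth and reflected in false positives/negatives and insights.
job = dataset.calculate_evaluation_metrics(model)
job.sleep_until_complete()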

During IoU calculation, bounding box Predictions are compared to ground truth Annotations using a greedy matching algorithm that matches the prediction and ground truth boxes with the highest IoUs first. By default, the matching algorithm is class-agnostic: it greedily creates matches regardless of class labels.

The algorithm can be tuned to count matches between certain classes as true positives, but not others, as shown in the sketch below. This is useful if the labels in your ground truth do not exactly match the strings of your model predictions, or if you want to associate multiple prediction labels with one ground truth label (or multiple ground truth labels with one prediction). To recompute metrics based on a different matching, re-commit the run with new request parameters.
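A hedged sketch of tuning the matching follows. The exact shape of allowed_label_matches (shown here as a list of ground-truth/prediction label pairs) is an assumption based on the option names above; consult your SDK version for the precise format.

# Only allow matches within the listed label pairs, e.g. when ground truth
# uses "car" but the model predicts "vehicle". The option shape is illustrative.
options = {
    "class_agnostic": False,
    "allowed_label_matches": [
        {"ground_truth_label": "car", "model_prediction_label": "vehicle"},
        {"ground_truth_label": "pedestrian", "model_prediction_label": "person"},
    ],
}

job = dataset.calculate_evaluation_metrics(model, options=options)
job.sleep_until_complete()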
