Calculate Model Metrics

📘 Python SDK

Dataset.calculate_evaluation_metrics(
    model: Model,
    options: dict = {
        "class_agnostic": bool,
        "allowed_label_matches": List[dict],
    },
) -> AsyncJob

This endpoint updates matches and metrics for a Model on a given Dataset (and its ground truth Annotations). Running it is required before you can sort by IoU, view false positives/false negatives, or view model insights.

You can add predictions from a model to a Dataset after metrics have been calculated. However, the calculation must be re-triggered for the new predictions to be matched with ground truth and appear as false positives/negatives, and for their effect on metrics to be reflected in model run insights.
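As a sketch of this flow with the Python SDK (the API key and IDs below are placeholders, and `get_dataset`/`get_model` usage is assumed from the SDK; substitute your own values):

```python
import nucleus

# Placeholder credentials and IDs for illustration only.
client = nucleus.NucleusClient("YOUR_API_KEY")
dataset = client.get_dataset("YOUR_DATASET_ID")
model = client.get_model("YOUR_MODEL_ID")

# Kick off matching and metric calculation; this returns an AsyncJob.
job = dataset.calculate_evaluation_metrics(model)
job.sleep_until_complete()

# After uploading new predictions for this model, re-trigger the
# calculation so the new predictions are matched against ground truth.
dataset.calculate_evaluation_metrics(model).sleep_until_complete()
```

This call is asynchronous, so blocking with `sleep_until_complete` (or polling the job) before reading insights avoids viewing stale metrics.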

During IoU calculation, bounding box Predictions are compared to ground truth Annotations using a greedy matching algorithm that pairs the prediction and ground truth boxes with the highest IoUs first. By default, the matching algorithm is class-agnostic: it greedily creates matches regardless of class labels.
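The greedy strategy can be sketched as follows. This is a toy illustration of the idea, not Scale's implementation; boxes are `(x1, y1, x2, y2)` tuples, and class labels are ignored, mirroring the class-agnostic default:

```python
from itertools import product

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def greedy_match(preds, gts):
    """Greedily pair predictions with ground truth, highest IoU first.

    Each prediction and each ground truth box is used at most once;
    class labels are ignored (class-agnostic matching).
    """
    pairs = sorted(
        ((iou(p, g), pi, gi)
         for (pi, p), (gi, g) in product(enumerate(preds), enumerate(gts))),
        reverse=True,
    )
    matched_p, matched_g, matches = set(), set(), []
    for score, pi, gi in pairs:
        if score > 0 and pi not in matched_p and gi not in matched_g:
            matched_p.add(pi)
            matched_g.add(gi)
            matches.append((pi, gi, score))
    return matches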

The algorithm can be tuned to count true positives only between certain classes. This is useful if the labels in your ground truth do not exactly match the label strings of your model predictions, if you want to associate multiple prediction labels with one ground truth label, or if you want to associate multiple ground truth labels with one prediction label. To recompute metrics with a different matching, re-commit the run with new request parameters.
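For example, suppose your ground truth uses the label "car" while your model predicts "vehicle". A minimal sketch of an `options` payload for that case (the AllowedMatch field names here are assumed for illustration; confirm them against the SDK reference):

```python
# Hypothetical labels; AllowedMatch field names are assumptions.
options = {
    # Turn off class-agnostic matching so only listed pairs can match.
    "class_agnostic": False,
    "allowed_label_matches": [
        # Let ground truth "car" match predictions labeled "vehicle".
        {"ground_truth_label": "car", "model_prediction_label": "vehicle"},
        # Still allow exact "car" <-> "car" matches.
        {"ground_truth_label": "car", "model_prediction_label": "car"},
    ],
}
```

Passing this dict as the `options` argument to `calculate_evaluation_metrics` and re-committing the run recomputes matches and metrics under the new constraints.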

Path Params

- string, required — The Scale-generated ID of the Dataset.
- string, required — The Scale-generated ID of the Model.

Body Params

allowed_label_matches (array of objects)

Optional list of AllowedMatch objects specifying allowed matches between ground truth Annotations and Model Predictions.
