Calculate Evaluation Metrics
After creating a model and uploading its predictions, you'll need to call this endpoint to match the predictions against ground truth annotations and calculate metrics such as IOU. This enables sorting by metrics, filtering down to false positives or false negatives, and the evaluation plots and metrics on the Insights page.
You can continue to add model predictions to a dataset even after running the metrics calculation. However, the calculation must be re-triggered for the new predictions to be matched against ground truth and reflected in sorting, false positive/negative filters, and the metrics used on the Insights page.
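For instance, re-triggering the calculation after uploading an additional prediction might look like the following sketch. It assumes the same dataset and model handles as the example further down, plus a placeholder reference_id and box coordinates; BoxPrediction and Dataset.upload_predictions are the SDK helpers for geometric predictions.
from nucleus import NucleusClient, BoxPrediction

client = NucleusClient("YOUR_SCALE_API_KEY")
dataset = client.get_dataset(dataset_id="YOUR_DATASET_ID")
model = client.get_model(model_id="YOUR_MODEL_ID", dataset_id="YOUR_DATASET_ID")

# Upload an additional prediction after metrics were already calculated once.
new_prediction = BoxPrediction(
    label="car",
    x=50, y=60, width=120, height=80,
    reference_id="image_437",  # placeholder item reference ID
    confidence=0.9,
)
dataset.upload_predictions(model, [new_prediction])

# Re-trigger matching and metric computation so the new prediction is
# matched against ground truth and picked up by sorts, filters, and Insights.
dataset.calculate_evaluation_metrics(model)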
How Nucleus matches predictions to ground truth
During IOU calculation for geometric predictions, predictions are greedily matched to ground truth by taking the highest-IOU pairs first. For segmentation predictions, a prediction and ground truth pair is matched if its IOU is greater than 1% or if it is a true positive. By default, the matching algorithm is class-sensitive: it will treat a match as a true positive if and only if the labels are the same.
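To make the greedy step concrete, here is a minimal sketch of class-sensitive greedy matching. It is an illustration only, not the Nucleus implementation; compute_iou is a hypothetical helper that returns the IOU for a prediction/ground-truth pair.
# Simplified sketch of greedy IOU matching (illustration only).
def greedy_match(predictions, ground_truths, compute_iou, class_sensitive=True):
    # Score every candidate (prediction, ground truth) pair, then take the
    # highest-IOU pairs first, skipping anything that is already matched.
    pairs = sorted(
        (
            (compute_iou(pred, gt), i, j)
            for i, pred in enumerate(predictions)
            for j, gt in enumerate(ground_truths)
            if not class_sensitive or pred.label == gt.label
        ),
        reverse=True,
    )
    matched_preds, matched_gts, matches = set(), set(), []
    for iou, i, j in pairs:
        if iou <= 0:
            break
        if i in matched_preds or j in matched_gts:
            continue
        matched_preds.add(i)
        matched_gts.add(j)
        matches.append((predictions[i], ground_truths[j], iou))
    return matches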
If you'd like to compute IOU while allowing associations between ground truth labels and prediction labels that don't share the same name, you can specify them using a list of allowed_label_matches (shown in the example below).
Note that this step will be completed automatically if you are working with CategoryPredictions (because matching predictions to ground truth is trivial).
from nucleus import NucleusClient

client = NucleusClient("YOUR_SCALE_API_KEY")
dataset = client.get_dataset(dataset_id="YOUR_DATASET_ID")
model = client.get_model(model_id="YOUR_MODEL_ID", dataset_id="YOUR_DATASET_ID")

# Associate car and bus bounding boxes for IOU computation,
# but otherwise force associations to have the same class (default).
dataset.calculate_evaluation_metrics(model, options={
    "allowed_label_matches": [
        {
            "ground_truth_label": "car",
            "model_prediction_label": "bus"
        },
        {
            "ground_truth_label": "bus",
            "model_prediction_label": "car"
        }
    ]
})
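If you don't need any cross-label associations, a minimal sketch is to omit the options argument entirely (assuming it is optional in your SDK version), so the default class-sensitive matching is used:
dataset.calculate_evaluation_metrics(model)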
Once your predictions' metrics have finished processing, you can check out the Objects tab or Insights page to explore, visualize, and debug your models and data!