By uploading model predictions to Nucleus, you can compare your predictions to ground truth annotations and discover problems with your models or dataset.
You can also upload predictions for unannotated data to enable curation and querying workflows. For instance, this can help you identify the most effective subset of unlabeled data to label next.
Prediction objects house the same information as Annotations, and can additionally contain a model confidence score and a probability distribution (PDF) over the classes in the taxonomy.
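As a rough illustration (the types and field names below are hypothetical stand-ins, not the Nucleus SDK), a prediction can be thought of as an annotation plus optional model outputs:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Annotation:
    # Ground truth: a labeled bounding box on a dataset item.
    reference_id: str   # which dataset item this belongs to
    label: str          # class name from the taxonomy
    x: float
    y: float
    width: float
    height: float

@dataclass
class Prediction(Annotation):
    # Same geometry/label fields as an annotation, plus model outputs:
    # an overall confidence and a probability per class in the taxonomy.
    confidence: Optional[float] = None
    class_pdf: Dict[str, float] = field(default_factory=dict)

pred = Prediction(
    reference_id="img_001", label="car",
    x=10, y=20, width=50, height=30,
    confidence=0.92,
    class_pdf={"car": 0.92, "truck": 0.06, "bus": 0.02},
)
# The class PDF should sum to 1 across the taxonomy.
assert abs(sum(pred.class_pdf.values()) - 1.0) < 1e-9
```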
Within Nucleus, models work as follows:
- Create a Model. You can do this just once and reuse the model across multiple datasets.
- Upload the model's predictions to your Dataset.
- Trigger calculation of evaluation metrics (if your Dataset has ground truth annotations).
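The three steps above can be sketched with in-memory stand-ins (the class and method names here are illustrative only, not the actual Nucleus SDK):

```python
from collections import defaultdict

class Model:
    # Step 1: a model is created once and identified by a reference_id,
    # so it can be reused across multiple datasets.
    def __init__(self, name, reference_id):
        self.name = name
        self.reference_id = reference_id

class Dataset:
    def __init__(self, name):
        self.name = name
        self.ground_truth = {}                # item_id -> true label
        self.predictions = defaultdict(dict)  # model ref -> {item_id: predicted label}

    def upload_predictions(self, model, preds):
        # Step 2: attach this model's predictions to the dataset.
        self.predictions[model.reference_id].update(preds)

    def evaluate(self, model):
        # Step 3: compute a simple metric (accuracy) against ground truth.
        # Only meaningful if the dataset has ground truth annotations.
        preds = self.predictions[model.reference_id]
        scored = [i for i in preds if i in self.ground_truth]
        if not scored:
            return None
        correct = sum(preds[i] == self.ground_truth[i] for i in scored)
        return correct / len(scored)

model = Model(name="retinanet-v3", reference_id="retinanet-v3-run1")

ds = Dataset("driving-scenes")
ds.ground_truth = {"img_1": "car", "img_2": "truck"}
ds.upload_predictions(model, {"img_1": "car", "img_2": "bus"})

print(ds.evaluate(model))  # 0.5: one of two predictions matches ground truth
```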
You'll then be able to debug your models against your ground truth qualitatively, with queries and visualizations, or quantitatively, with metrics, plots, and other insights. You can also compare multiple models that have been run on the same dataset.
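To make the model-comparison idea concrete (a pure-Python sketch under assumed toy data, not SDK code): given two models' predictions over the same labeled dataset, you can rank them by a shared metric such as accuracy:

```python
# Hypothetical ground truth and per-model predictions for three items.
ground_truth = {"img_1": "car", "img_2": "truck", "img_3": "pedestrian"}
model_preds = {
    "model_a": {"img_1": "car", "img_2": "truck", "img_3": "car"},
    "model_b": {"img_1": "car", "img_2": "bus",   "img_3": "car"},
}

def accuracy(preds, truth):
    # Score only items that have both a prediction and a ground truth label.
    shared = truth.keys() & preds.keys()
    return sum(preds[i] == truth[i] for i in shared) / len(shared)

scores = {name: accuracy(preds, ground_truth)
          for name, preds in model_preds.items()}
best = max(scores, key=scores.get)
print(scores, best)  # model_a scores 2/3, model_b scores 1/3
```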