Find Inaccurate Annotations
Instances with high prediction confidence & mismatched annotation
You can use Nucleus to debug your datasets and find inaccurate annotations by surfacing high-confidence predictions from your models that disagree with the ground truth.
Pre-reqs: Dataset, Annotations, Predictions
- Open a dataset with annotations & predictions uploaded
- Go to the objects tab and open the search options sidebar
- Under the model filter, select the model you want to use
- Under the object filter, select false positives (i.e., prediction and annotation mismatches)
- Set the IoU slider range from 0.5 to 1 to get predictions and annotations with a large overlap
- Sort the results by confidence in descending order to get high-confidence predictions
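The filtering these steps perform can be sketched in plain Python. This is an illustrative sketch, not the Nucleus SDK: the box format, field names, and pairing of predictions with annotations are all assumptions.

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def likely_label_errors(pairs, iou_min=0.5):
    # pairs: (prediction, annotation) dicts; predictions also carry "confidence".
    # Keep pairs whose boxes overlap heavily but whose labels disagree.
    hits = [
        (pred, ann) for pred, ann in pairs
        if pred["label"] != ann["label"] and iou(pred["box"], ann["box"]) >= iou_min
    ]
    # Highest-confidence disagreements first: the most likely annotation mistakes.
    return sorted(hits, key=lambda p: p[0]["confidence"], reverse=True)
```

For example, a confident "cat" prediction that overlaps a box labeled "dog" at IoU 0.81 passes the filter, while a mismatch with no box overlap does not.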
Among the final results, you can see that many of the predictions are clearly correct, yet they are classified as false positives because the annotator applied the wrong label. You can compile these high-confidence predictions into a slice and send them back for re-annotation.
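Compiling the flagged items into a slice amounts to collecting the reference IDs of the affected items. A minimal sketch, where the confidence threshold and field names are assumptions; the commented slice-creation call at the end shows roughly where the Nucleus Python client would take over (its exact signature is assumed here, not taken from the source):

```python
def slice_candidates(flagged, confidence_min=0.9):
    # flagged: prediction dicts carrying "reference_id" and "confidence".
    # Keep only high-confidence disagreements, deduplicated, order preserved.
    ids = []
    for pred in flagged:
        if pred["confidence"] >= confidence_min and pred["reference_id"] not in ids:
            ids.append(pred["reference_id"])
    return ids

# These IDs are what you would hand to a slice-creation call in the
# Nucleus client (assumed API), e.g.:
#   dataset.create_slice(name="relabel-candidates", reference_ids=ids)
```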