Models in Nucleus
Models in Nucleus represent actual models in your inference pipeline. There are two ways to create models in Nucleus:
- Shell Models: These are empty models, i.e. models with no associated artifacts. Shell models can be used to upload model predictions into Nucleus. They are most relevant if you are running inference on your end and only want to make the inference results available in Nucleus.
- Hosted Models: These are real models with associated artifacts, e.g. a trained TensorFlow model. Hosted models are used when you want to host your models and run inference over a chosen dataset in Nucleus. The output predictions are then automatically associated with the corresponding hosted Model.
Shell Model
When creating a Shell Model, you will need to provide a name and a unique reference ID for the model. You can also attach arbitrary metadata, e.g. the timestamp of its creation in Nucleus.
```python
import nucleus

client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")

# create a new model
model = client.create_model(
    name="My Detection Model v7",
    reference_id="detection-model-v7",
    metadata={"timestamp": 1647998396},
)
```
Each Prediction in Nucleus is associated with exactly one Model and one Dataset. Under the hood, Nucleus defines a "model run" as the linkage between a Model and a Dataset based on cross-referenced Predictions. Functionally, when interacting with the Nucleus API through the Python client, you can use model_id (starts with prj_) and model_run_id (starts with run_) interchangeably; both can be found on the Models page in the dashboard, and in the dashboard URL when filtering or displaying specific models.
If you already have existing Models in Nucleus, you can list them all or retrieve a specific model by its ID.
```python
import nucleus

client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")

# list all models (reconstructs each Model object)
print(client.models)

# retrieve a model by ID
model = client.get_model("prj_123foobar")  # a model_run_id, e.g. run_456foobaz, also works
```
Model tags
In order to better structure and search your models, you can add user-defined tags to them. This is especially handy when working with models for different applications or sensor modalities, or simply with different model versions.
Tags are passed as a list of strings and can also easily be removed through our SDK.
```python
import nucleus

client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
model = client.get_model("prj_123foobar")  # a model_run_id, e.g. run_456foobaz, also works

# add user-defined tags to a specific model
model.add_tags(["object_detection", "yolo", "version:1.0.0"])

# list all tags of a model
print(model.tags)

# remove tags from a model
model.remove_tags(["yolo"])
```
Hosted Model
Nucleus allows uploading your own models for inference runs, including models trained with popular ML libraries such as PyTorch and TensorFlow. This guide outlines how to upload a new model through the Nucleus client library and run it on any dataset or slice in the UI.
The representation of model code in Nucleus is called a model bundle. Currently, a bundle can only be attached to a Nucleus model at the model's creation, so bundles can't be added to existing Nucleus models.
Read more here.
Scale Model Zoo
You can also run an off-the-shelf model from Scale! Check out our guide on how to Run Inference with Standard Models.