Hardware Used
By default, inference code runs on NVIDIA Tesla T4 GPUs, so make sure to set the PyTorch device to CUDA. For custom hardware requirements, reach out to Scale support.
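A minimal sketch of setting the PyTorch device as described above. The CPU fallback is an assumption for local testing; on the default Tesla T4 hardware the CUDA branch is taken.

```python
import torch

# Use the CUDA device when available (a Tesla T4 on the default hardware);
# fall back to CPU so the same code also runs on machines without a GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)   # move model weights to the device
inputs = torch.randn(1, 4, device=device)  # allocate inputs on the same device
outputs = model(inputs)                    # forward pass runs on the GPU when present
```

Keeping the model and its inputs on the same device avoids the `RuntimeError` PyTorch raises when tensors on different devices are mixed in one operation.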
Updated about 2 years ago