Model Conversion
Overview
To utilize the full potential of HubAI models on our devices, they need to be converted to the compiled format of the RVC platform they are intended to run on. Below, we provide simple step-by-step instructions for conversion on HubAI using our cloud services. For local (offline) conversion, please refer to the Modelconverter tool.
Conversion Guidelines
It is assumed here that the models intended for conversion have already been uploaded to HubAI. If this step has not yet been completed, please refer to the Model Upload guidelines.

- Click the icon of the desired public or private (team-owned) model, scroll down to Model Versions, and click on the desired model version.

- Click the Convert Model button in the top right corner of the page.

- This opens a pop-up window listing all the RVC Platforms for which the model can be converted. Note that the available options depend on the format of the uploaded model file; in brief, only the ONNX format allows conversion to all platforms. Select the platform for which the model should be converted.

- Next, an additional pop-up window opens. Fill in the empty fields with model descriptors and conversion parameters. Depending on the selected platform, you will be asked to fill in some of the following parameters (some might be predefined if the Model File is an NN Archive):
- Model Version Name: Name of the converted model (e.g. <model_slug>:<platform>).
- Disable Onnx Simplification (OPTIONAL): Select this option to disable ONNX model simplification (replacement of redundant operators with their constant outputs) during conversion.
- Mo Args (OPTIONAL): For RVC2 and RVC3 platform conversions, the OpenVINO Model Optimizer is used to convert the model to IR format. You can specify custom arguments here. In case of a conflict between custom and default arguments, custom ones will always take precedence. See the official documentation for more information.
- Compile Tool Args (OPTIONAL): For RVC2, the OpenVINO compile tool is used to compile the model for inference. You can specify custom arguments here. In case of a conflict between custom and default arguments, custom ones will always take precedence. See the official documentation for more information.
- POT Target Device: Target device for POT (RVC3 only). Preferably set this to VPU, but use ANY if that fails.
- Shape: The input shape of the network.
- Scale Values: A list of scale values to be used for each channel (e.g. 123.675, 116.28, and 103.53 for ImageNet).
- Mean Values: A list of mean values to be used for each channel (e.g. 58.395, 57.12, and 57.375 for ImageNet).
- Reverse Input Channels (OPTIONAL): Transform the input channel order from RGB to BGR (or vice versa).
- Quantization Data: For RVC3 and RVC4 platform conversions, quantization is performed to reduce the model's computational and memory costs (read more in the Conversion Concepts section). This process requires example input data to be passed to the model. There is currently no option to upload a custom quantization dataset, but we offer some generic ones for you to choose from (curated sets of 1024 images taken from the Open Images V7 and forklift-1 datasets). To get the best possible quantization results, choose the one most similar to the model's training dataset:
- Driving - Images of streets and vehicles (OIv7 classes like Vehicle, Car, Traffic light, etc.);
- Food - Images of fruit, vegetables, raw and prepared foods (OIv7 classes like Apple, Salad, Pizza, etc.);
- General - A random subset of OIv7 images representing a diverse set of objects and scenes;
- Indoors - Images of indoor spaces (OIv7 classes like Table, Chair, Fireplace, etc.);
- Random - Random-pixel images;
- Warehouse - Images of warehouse interiors (a random subset of forklift-1 images);
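The dataset choice matters because the quantization parameters are estimated from the calibration inputs. A minimal sketch of the general idea, using asymmetric affine uint8 quantization over hypothetical calibration values (illustrative only, not the exact scheme the converter uses):

```python
import numpy as np

# Hypothetical values standing in for activations observed while feeding
# the chosen calibration image set through the model.
calib = np.array([0.0, 0.5, 1.2, 3.0, 6.0], dtype=np.float32)

# Asymmetric affine quantization: map the observed range [lo, hi] onto [0, 255].
lo, hi = float(calib.min()), float(calib.max())
scale = (hi - lo) / 255.0
zero_point = int(round(-lo / scale))

def quantize(x):
    # Round to the nearest uint8 level; values outside [lo, hi] are clipped.
    return np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)

def dequantize(xq):
    # Recover an approximation of the original float values.
    return (xq.astype(np.float32) - zero_point) * scale
```

Inputs outside the calibrated range get clipped, which is why a calibration set that resembles the training data yields better accuracy than a mismatched one.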
During conversion, preprocessing is added to the model. It is generally advised to fill in the relevant parameters (Scale Values, Mean Values, Reverse Input Channels) so that the converted model expects BGR input with no additional scaling or mean shifting. The order of preprocessing operations is: (1) reversing input channels, (2) subtracting mean values, and (3) dividing by scale values. Keep this in mind when specifying inputs, i.e. mean and scale values should follow the channel order of the original color encoding of the model.
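As an illustration, the preprocessing baked into the converted model is equivalent to the following numpy sketch, using the ImageNet mean and scale values from above (the function name is ours):

```python
import numpy as np

# ImageNet normalization constants, given in the model's original (RGB)
# channel order -- channel reversal happens before mean/scale are applied.
MEAN = np.array([123.675, 116.28, 103.53], dtype=np.float32)
SCALE = np.array([58.395, 57.12, 57.375], dtype=np.float32)

def preprocess(bgr_frame):
    """Replicates the baked-in steps on an HxWx3 BGR frame:
    (1) reverse input channels, (2) subtract mean, (3) divide by scale."""
    rgb = bgr_frame[..., ::-1].astype(np.float32)  # (1) BGR -> RGB
    return (rgb - MEAN) / SCALE                    # (2) and (3)
```

With the parameters set this way, the deployed model can be fed raw BGR frames directly and performs the normalization internally.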
- Once you have filled in the parameters, click the Export button. This starts the conversion process by creating a new conversion instance with its status marker set to Pending. The conversion will take a few minutes to complete, depending on the size and complexity of the model.

- Successful conversion is marked by the Success status marker. The model can now be downloaded or referenced through the DepthAI API (see the Inference guidelines).

Troubleshooting
Not all models can be converted for the desired platform. Please consult the Troubleshooting section on the Conversion page. On HubAI, you can check the logs of a conversion job as follows:
- Find the failed conversion job in the Failed Conversions section and click on it.

- Download the conversion logs by clicking on the Logs button in the top right corner or inspect them directly by scrolling the Conversion process logs at the bottom of the page.

If export fails, corrections need to be made either to the network itself or to the conversion parameters. Sometimes you might have to split the model and offload the tricky part to the CPU (this typically happens in the presence of dynamic operations). Try the onnx-modifier tool for such operations.