BlobConverter

Overview

BlobConverter is our open-source tool for converting models to the MyriadX (.blob) format used by RVC2 and RVC3 Luxonis devices. The tool is accessible through a web interface, a Python package, and an API. This section guides you through all of the conversion options.

Conversion

BlobConverter Web Interface

  • Go to the BlobConverter website.
  • Select the OpenVINO version you wish to use. We will use the latest version supported by BlobConverter, which is currently 2022.1. For RAE and other devices with RVC3, simply pick RVC3. After choosing the version, select the model source (in our case, ONNX Model; you can also upload a model in the OpenVINO IR format) and click Continue.
  • Upload the ONNX file by clicking on Choose file.
  • Optionally, before running the conversion, you can customize the conversion parameters by clicking on Advanced.
  • Finally, click Convert and wait until the process is finished.

BlobConverter Python

We offer BlobConverter as a Python package as well (PyPI link). You can install it using pip:
Command Line
pip install blobconverter
Now you can convert your ONNX model to a blob with the specified conversion parameters.
Python
import blobconverter

blob_path = blobconverter.from_onnx(
    model="/path/to/model.onnx",
    data_type="FP16",
    shaves=6,
    version="2022.1"
)
You can check out more examples for conversion from different model sources here.
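For instance, models already in the OpenVINO IR format, or models from the OpenVINO model zoo, can be converted in much the same way. The snippet below is a short sketch; the paths and the zoo model name are placeholders.
Python
import blobconverter

# Convert a model that is already in OpenVINO IR format (.xml + .bin)
ir_blob_path = blobconverter.from_openvino(
    xml="/path/to/model.xml",
    bin="/path/to/model.bin",
    data_type="FP16",
    shaves=6,
    version="2022.1",
)

# Fetch and convert a model straight from the OpenVINO model zoo
zoo_blob_path = blobconverter.from_zoo(
    name="face-detection-retail-0004",  # placeholder zoo model name
    shaves=6,
    version="2022.1",
)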

BlobConverter API

Alternatively, you can use the BlobConverter API, which is particularly useful for automated workflows. This can be done by making HTTP requests to the BlobConverter service with the necessary model and parameters. You can find more information on how to do this by clicking on the Use API button at the top right corner of the website.
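For a rough idea of what such a request looks like from Python, see the sketch below. The endpoint path and form field names here are assumptions for illustration only; the exact request format is documented under the Use API button.
Python
import requests

# NOTE: the endpoint and field names below are illustrative assumptions;
# consult the "Use API" documentation on the website for the exact format.
url = "https://blobconverter.luxonis.com/compile"  # assumed endpoint

files = {"model": open("resnet18.onnx", "rb")}     # assumed file field name
data = {
    "myriad_shaves": "6",                          # assumed form field
    "myriad_params_advanced": "--data_type=FP16",  # assumed form field
}

response = requests.post(url, params={"version": "2022.1"}, data=data, files=files)
response.raise_for_status()

# The response is expected to contain the compiled .blob
with open("model.blob", "wb") as f:
    f.write(response.content)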

Export Example

This guide walks you through exporting ResNet18, a widely used deep neural network for image classification, to a .blob file for deployment on OAK devices. We will use torchvision to access the pre-trained model.

Exporting the Model to ONNX

First, we will export the ResNet18 model from PyTorch to the ONNX format.
Python
import torch
import torchvision.models as models

# Load the pretrained ResNet18 model from torchvision
resnet18 = models.resnet18(pretrained=True)

# Set the model to evaluation mode
resnet18.eval()

# Create a dummy input tensor matching the input shape of the model
dummy_input = torch.randn(1, 3, 224, 224)

# Export the model to an ONNX file
torch.onnx.export(
    resnet18,
    dummy_input,
    'resnet18.onnx',
    export_params=True,
    opset_version=11,
    input_names=['input'],
    output_names=['output']
)
Parameters Explanation:
  • export_params: This flag ensures that the trained parameters are exported along with the model structure.
  • opset_version: Specifies the ONNX operator set (opset) version to use. We use version 11, which covers all operators ResNet18 needs; higher versions can also work.
  • input_names and output_names: We use these flags to name the model's input and output nodes for clarity. In our example, the input node is named "input" and the output node "output".
  • After exporting, you'll get a file named "resnet18.onnx" as defined in the third argument.
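Optionally, you can run a quick sanity check on the exported file before converting it. This is a minimal sketch and assumes the onnx package is installed (pip install onnx).
Python
import onnx

# Load the exported model and run ONNX's structural validation
model = onnx.load("resnet18.onnx")
onnx.checker.check_model(model)
print("resnet18.onnx passed the ONNX checker")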

Convert ONNX to .blob Using BlobConverter

Instead of manually converting the ONNX file to OpenVINO IR and then compiling it, we'll use BlobConverter to handle both steps.
  • Go to the BlobConverter website.
  • Choose the appropriate OpenVINO version, which for this example is 2022.1.
  • Upload the .onnx file and enter any necessary Model Optimizer parameters in the 'Advanced' settings.
  • --data_type: Set to FP16, the precision supported by the VPU.
  • --mean_values: Set to [123.675, 116.28, 103.53]. These values correspond to the average of the red, green, and blue channels across all images in the ImageNet dataset (on which ResNet18 was trained).
  • --scale_values: Set to [58.395, 57.12, 57.375] which are the standard deviations of each channel. This scaling ensures that the range of pixel values in the input image matches the range in the training data, which is important for the model to perform correctly.
  • --reverse_input_channels: Use this flag to switch from BGR to RGB, since the Camera node outputs frames in the BGR format, and the model requires RGB images.
  • So at the end, the flags should look like this:
Command Line
--data_type=FP16 --mean_values=[123.675,116.28,103.53] --scale_values=[58.395,57.12,57.375] --reverse_input_channels
  • Click Convert to start the conversion and then download the .blob file once the process is completed.
After following these instructions, you will get a resnet18.blob file that is ready for inference on OAK devices. The converted model expects BGR images with pixel values in the 0 to 255 range; the channel reordering, mean subtraction, and scaling specified by the flags are baked into the model, so no additional preprocessing is required on the host side.
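If you prefer to script this last step rather than use the web interface, the same conversion can also be done with the blobconverter Python package. The sketch below assumes the optimizer_params argument, which forwards flags to the Model Optimizer.
Python
import blobconverter

# Convert the exported ONNX model with the same Model Optimizer flags
blob_path = blobconverter.from_onnx(
    model="resnet18.onnx",
    data_type="FP16",
    shaves=6,
    version="2022.1",
    optimizer_params=[
        "--mean_values=[123.675,116.28,103.53]",
        "--scale_values=[58.395,57.12,57.375]",
        "--reverse_input_channels",
    ],  # assumed argument name for passing Model Optimizer flags
)
print(blob_path)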