Converting model to MyriadX blob

To allow DepthAI to use your custom trained models, you need to convert them into the MyriadX blob file format, so that they are optimized for inference on the MyriadX VPU processor.

There are two conversion steps required to obtain a blob file:

  • Use the Model Optimizer to produce an OpenVINO IR representation (where IR stands for Intermediate Representation)

  • Use the Model Compiler to convert the IR representation into a MyriadX blob

The image below (from the OpenCV Courses site) shows these steps:

Model Compile Steps

Below you will find instructions on how to perform these steps using different methods.

Local compilation

If you want to perform model conversion and compilation locally, you can install the OpenVINO toolkit and run its Model Optimizer and Compile Tool yourself, as sketched below.
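With a local OpenVINO installation, both steps can be scripted. The following is a minimal sketch that assumes the mo and compile_tool executables from OpenVINO are on your PATH and that the input is an ONNX model named model.onnx; exact command names and flags differ between OpenVINO releases, so check the documentation for your version.

    # Minimal sketch: ONNX -> OpenVINO IR -> MyriadX blob.
    # Assumes a local OpenVINO installation with `mo` and `compile_tool`
    # on PATH; flag names vary between OpenVINO releases.
    import subprocess

    # Step 1: Model Optimizer produces the IR (model.xml + model.bin)
    subprocess.run([
        "mo",
        "--input_model", "model.onnx",  # hypothetical input model
        "--data_type", "FP16",          # MyriadX inference runs in FP16
        "--output_dir", "ir",
    ], check=True)

    # Step 2: Compile Tool turns the IR into a MyriadX blob
    subprocess.run([
        "compile_tool",
        "-m", "ir/model.xml",
        "-d", "MYRIAD",                    # target the MyriadX VPU
        "-ip", "U8",                       # example input precision
        "-VPU_NUMBER_OF_SHAVES", "6",      # resources to compile for
        "-VPU_NUMBER_OF_CMX_SLICES", "6",
        "-o", "model.blob",
    ], check=True)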

Using Google Colab

You can also train and convert models using a Google Colab notebook. You can take a look at our Custom training page, where every tutorial also contains the conversion & compilation steps performed directly inside the notebooks.

An example notebook with the compilation steps is available here.

Using online converter

You can also visit our online MyriadX Blob converter at http://luxonis.com:8080/, which allows you to specify different OpenVINO target versions and supports conversions from TensorFlow, Caffe, OpenVINO IR, and the OpenVINO Model Zoo.

BlobConverter Web

Using blobconverter package

For automated usage of our blobconverter tool, we have released a blobconverter PyPI package, which allows compiling MyriadX blobs both from the command line and directly from a Python script.

Installation and usage instructions can be found here.
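As a quick illustration, the sketch below shows the Python API; the model name, shave count, and paths are examples, so consult the package documentation for the options your version supports.

    # Minimal sketch of the blobconverter Python API
    # (pip install blobconverter). Model name, shave count, and paths
    # below are illustrative examples.
    import blobconverter

    # Compile a model straight from the OpenVINO Model Zoo
    blob_path = blobconverter.from_zoo(
        name="mobilenet-ssd",  # example zoo model
        shaves=6,              # number of SHAVE cores to compile for
    )
    print(blob_path)

    # Or compile your own OpenVINO IR (.xml/.bin) files
    blob_path = blobconverter.from_openvino(
        xml="model.xml",       # hypothetical paths to your IR
        bin="model.bin",
        data_type="FP16",
        shaves=6,
    )

From the command line, the rough equivalent of the first call is python3 -m blobconverter --zoo-name mobilenet-ssd --shaves 6.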

Troubleshooting

When converting your model to the OpenVINO format or compiling it to a .blob, you might come across an issue. This usually means that either a layer itself or the connection between two layers is not supported.

Supported layers

When converting your model to the OpenVINO format (.bin and .xml), you have to check whether OpenVINO supports the layers that were used. Here are the supported layers and their limitations for Caffe, MXNet, TensorFlow, TensorFlow 2 Keras, Kaldi, and ONNX.

Even after the conversion to OpenVINO, the VPU (Intel's MyriadX) might not support a given layer. You can find the OpenVINO layers supported by the VPU here, under the Supported Layers header, in the third column (VPU).

Incorrect data types

If the compiler returns something along the lines of “check error: input #0 has type S32, but one of [FP16] is expected”, it means that you are using incorrect data types. In the case above, an INT32 layer is connected directly to an FP16 layer. There should be a conversion between these layers, which can be achieved by inserting OpenVINO's Convert layer between them. You can do that by editing your model's .xml and adding the Convert layer, as sketched below. You can find additional information in this Discord thread.
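For illustration, a Convert layer in the IR .xml looks roughly like the fragment below. All ids, names, and dimensions are placeholders that must match your model, and the edges section of the .xml has to be rewired so that the I32 producer feeds the Convert layer and the Convert layer feeds the original FP16 consumer; destination_type="f16" requests conversion to FP16.

    <!-- Hypothetical fragment: a Convert layer inserted between an I32
         output and an FP16 input. Ids, names, and dims are placeholders. -->
    <layer id="10" name="to_fp16" type="Convert" version="opset1">
        <data destination_type="f16"/>
        <input>
            <port id="0">
                <dim>1</dim>
                <dim>1000</dim>
            </port>
        </input>
        <output>
            <port id="1" precision="FP16">
                <dim>1</dim>
                <dim>1000</dim>
            </port>
        </output>
    </layer>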

Got questions?

We’re always happy to help with code or other questions you might have.