On Device Programming

While regular (firmware) on-device development is not possible due to the closed nature of the native tooling, we expose a few alternative ways of running custom code:
  1. Scripting - Using Python 3.9 with Script node
  2. Creating your own NN model to run more computationally heavy features
  3. Creating custom OpenCL kernels

Using Script node

The Script node allows you to run custom Python scripts on the device itself, which gives you greater flexibility when constructing pipelines. The Script node is also very useful when running multiple neural networks in series and you need to process the output of the first network before feeding an image to the second one. An example would be a face age/gender recognition demo: the first NN detects faces and passes the detections to the Script node, which creates an ImageManipConfig to crop the original frame, so only the cropped face frame is fed to the age/gender recognition NN (see the sketch below).

For computationally heavy features (e.g. image filters), you might want to avoid the Script node for performance reasons and instead go with one of the two options described below.
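As a minimal sketch of the face-cropping idea above: a Script node reads detections and emits crop configs for an ImageManip node. The input/output names ('dets', 'manip_cfg') and the linked nodes are assumptions for illustration only.

```python
import depthai as dai

pipeline = dai.Pipeline()

# Script node that converts NN detections into crop configs for an ImageManip node
script = pipeline.create(dai.node.Script)
script.setScript("""
while True:
    # 'dets' is the input name chosen when linking the detection NN (assumption)
    detections = node.io['dets'].get().detections
    for det in detections:
        cfg = ImageManipConfig()
        # Crop the original frame to the detected face bounding box
        cfg.setCropRect(det.xmin, det.ymin, det.xmax, det.ymax)
        node.io['manip_cfg'].send(cfg)
""")

# Example linking (camera, detection NN and ImageManip nodes not shown here):
# face_det_nn.out.link(script.inputs['dets'])
# script.outputs['manip_cfg'].link(manip.inputConfig)
```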

Creating custom NN models

You can create custom models with your favorite NN library, convert the model into OpenVINO's IR format, and then compile it into a .blob. More information on this topic can be found in the Converting model to MyriadX blob documentation. Refer to the Computer Vision page to find out more.
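As a rough sketch, the conversion and compilation step can also be driven from Python with the blobconverter package; the model filename, data type, and SHAVE count below are assumptions chosen for illustration.

```python
import blobconverter

# Convert an exported ONNX model to OpenVINO IR and compile it to a MyriadX .blob
blob_path = blobconverter.from_onnx(
    model="my_model.onnx",  # path to the model exported from your NN library (assumption)
    data_type="FP16",       # the VPU runs inference in FP16
    shaves=6,               # number of SHAVE cores to compile the model for
)
print(f"Compiled blob saved to: {blob_path}")
```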

Supported layers

When converting your model to OpenVINO's IR format (.bin and .xml), you have to check whether OpenVINO supports the layers that were used. The supported layers and their limitations for the various frameworks are listed in the OpenVINO documentation.

Creating custom OpenCL kernel

Creating custom NN models has some limitations, for example the set of layers supported by OpenVINO/VPU. To avoid these limitations, you can create a custom OpenCL kernel and compile it for the VPU; the kernel will run on the VPU's SHAVE core(s). Keep in mind that this option is not very user-friendly. We plan to create a tutorial on how to develop such kernels and run them on OAK cameras.