Computer Vision

Our platform supports running computer vision (CV) functions directly on the device. While you can't run OpenCV on the device itself, many of its common functions are supported. With DepthAI, you can:
  • Crop, rotate, warp/dewarp, mirror, flip, transform perspective, etc. with ImageManip (see the sketch after this list)
  • Detect edges (Sobel filter) with EdgeDetector
  • Detect and track features with FeatureTracker
  • Track objects (Kalman filter, Hungarian algorithm) with ObjectTracker
  • Run Yolo and MobileNet object detectors out-of-the-box
  • Perceive stereo depth (Census Transform, Cost Matching and Aggregation) with StereoDepth
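For example, a minimal sketch of on-device cropping and resizing with the ImageManip node could look like this (the preview size, crop rectangle, and stream name are arbitrary placeholders):
Python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera producing a 300x300 planar BGR preview
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# Crop the central region and resize it, all on the device
manip = pipeline.create(dai.node.ImageManip)
manip.initialConfig.setCropRect(0.1, 0.1, 0.9, 0.9)
manip.initialConfig.setResize(224, 224)
cam.preview.link(manip.inputImage)

# Send the processed frames to the host
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("manip")
manip.out.link(xout.input)

with dai.Device(pipeline) as device:
    frame = device.getOutputQueue("manip").get().getCvFrame()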
If you would like to use any other CV functions, see the guide below on how to implement and run them efficiently on the device's hardware-accelerated blocks.

Run your own CV functions on-device


Create a custom model with PyTorch

For the sake of this guide, we will create a simple model that concatenates three frames into one. The same procedure can be used to create more complex models.

TL;DR: If you are only interested in the implementation, check out the demos in depthai-experiments/custom-models.

1. Create PyTorch NN module

We first need to create a Python class that extends PyTorch's nn.Module. We can then put our NN logic into the forward function of the created class. In the example of frame concatenation, we can use the torch.cat function to concatenate multiple frames:
Python
import torch
from torch import nn

class CatImgs(nn.Module):
    def forward(self, img1, img2, img3):
        return torch.cat((img1, img2, img3), 3)
For a more complex module, please refer to the Harris corner detection in PyTorch demo by Kunal Tyagi. Keep in mind that the VPU supports only FP16, which means the maximum representable value is 65504. When multiplying even a few values, you can quickly overflow if you don't properly normalize/divide them.
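For illustration, a minimal sketch of such normalization inside the forward pass (the SafeMultiply name and the divisor of 255 are just placeholders for whatever scaling fits your model):
Python
import torch
from torch import nn

class SafeMultiply(nn.Module):
    def forward(self, img1, img2):
        # Scale 0-255 pixel values down to 0-1 before multiplying,
        # so intermediate results stay well below the FP16 maximum of 65504
        a = img1 / 255.0
        b = img2 / 255.0
        return a * b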
2. Export the NN module to onnx

Since PyTorch isn't directly supported by OpenVINO, we first need to export the model to the onnx format and then convert it to OpenVINO. PyTorch has integrated onnx support, so exporting is as simple as:
Python
# For 300x300 frames
X = torch.ones((1, 3, 300, 300), dtype=torch.float32)
torch.onnx.export(
    CatImgs(),
    (X, X, X), # Dummy input for shape
    "path/to/model.onnx",
    opset_version=12,
    do_constant_folding=True,
)
This will export the concatenation model to the onnx format. We can visualize the created model using the Netron app.
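If you prefer to inspect the export programmatically instead of (or in addition to) Netron, the onnx package can validate the model and print its graph; a minimal sketch:
Python
import onnx

model = onnx.load("path/to/model.onnx")
onnx.checker.check_model(model)                   # raises if the model is malformed
print(onnx.helper.printable_graph(model.graph))   # human-readable graph dump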
3. Simplify the onnx model

When exporting a model to onnx, PyTorch isn't very efficient: it creates tons of unnecessary operations/layers, which increases the size of your network and can lead to lower FPS. That's why we recommend using onnx-simplifier, a simple Python package that removes these unnecessary operations/layers.
Python
import onnx
from onnxsim import simplify

onnx_model = onnx.load("path/to/model.onnx")
model_simplified, check = simplify(onnx_model)
onnx.save(model_simplified, "path/to/simplified/model.onnx")
Here is an example of how significant the simplification can be with onnx-simplifier: on the left, a blur model (from Kornia) exported directly from PyTorch; on the right, a simplified network with the same functionality.
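A quick way to see the effect on your own model is to compare the node counts of the two graphs; a minimal sketch:
Python
import onnx

original = onnx.load("path/to/model.onnx")
simplified = onnx.load("path/to/simplified/model.onnx")
print("Nodes before simplification:", len(original.graph.node))
print("Nodes after simplification: ", len(simplified.graph.node))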
4. Convert to OpenVINO/blob

Now that we have a (simplified) onnx model, we can convert it to OpenVINO and then to the .blob format. For additional information about converting models, see the conversion guide. This would usually be done by first using OpenVINO's Model Optimizer to convert from onnx to the IR format (.bin/.xml) and then using the Compile Tool to compile it to a .blob. Alternatively, blobconverter can convert from onnx directly to .blob; it performs both steps at once, without the need to install OpenVINO. You can compile your onnx model like this:
Python
import blobconverter

# from_onnx returns the path to the compiled .blob
blob_path = blobconverter.from_onnx(
    model="/path/to/model.onnx",
    output_dir="/path/to/output/",  # directory where the .blob will be saved
    data_type="FP16",
    shaves=6,
    use_cache=False,
    optimizer_params=[],
)
5. Use the .blob in your pipeline

You can now use your .blob model with the NeuralNetwork node. Check depthai-experiments/custom-models to run the demo applications that use these custom models.
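As a rough sketch of what that looks like for the three-input concatenation model, assuming the blob's inputs are named img1, img2 and img3 (check the actual input names in Netron) and feeding the same camera preview into all three inputs purely for illustration:
Python
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("path/to/model.blob")

# Feed the same preview into all three inputs, purely for illustration
for name in ("img1", "img2", "img3"):
    cam.preview.link(nn.inputs[name])

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("nn")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    result = device.getOutputQueue("nn").get()  # NNData with the concatenated frame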

Kornia

Kornia, "State-of-the-art and curated Computer Vision algorithms for AI", has a set of common computer vision algorithms implemented in PyTorch. This allows users to do something like:
Python
import kornia
from torch import nn

class Model(nn.Module):
    def forward(self, image):
        return kornia.filters.gaussian_blur2d(image, (9, 9), (2.5, 2.5))
and then follow the exact same procedure described in Create a custom model with PyTorch to achieve on-device frame blurring.
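The export step then looks just like step 2; a minimal sketch reusing the Model class above (the input shape and output path are placeholders):
Python
import torch

# Dummy 300x300 RGB frame, used only to trace the graph shape
X = torch.ones((1, 3, 300, 300), dtype=torch.float32)
torch.onnx.export(
    Model(),
    X,  # single input this time
    "path/to/blur.onnx",
    opset_version=12,
    do_constant_folding=True,
)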