
Neural Network

Uses the NeuralNetwork node to run a NN model on the device. The example uses a pre-trained yolov6-nano model that it downloads from the HubAI model zoo, but you can easily swap in your own custom-trained model. To deploy a custom-trained model, you first need to convert it, either to a .dlc file or to an NN archive (.tar.xz) - see Deploying custom model below.

Demo

When you run this example, it only prints the output layer shape; it doesn't do any model decoding / visualization:
Command Line
Received NN data: (1, 9, 16, 85)
Received NN data: (1, 9, 16, 85)
Received NN data: (1, 9, 16, 85)
Received NN data: (1, 9, 16, 85)
Received NN data: (1, 9, 16, 85)
Received NN data: (1, 9, 16, 85)
Understanding the output of the Yolov6-nano:
  • Batch size is 1, as inference runs on every (single) frame
  • 9x16 grid, as YOLO divides the input image into a grid of 9x16 cells
  • 85 prediction values per grid cell: 4 for the bounding box, 1 for the confidence score, and 80 for class probabilities (the model was trained on the 80-class COCO dataset)
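As a sketch of what decoding this tensor would involve, the 85 values per grid cell can be split along the last axis. The layout below is an assumption based on the shape and the breakdown above; a real YOLOv6 decoder additionally applies activations and maps grid coordinates back to image space:

```python
import numpy as np

# Dummy tensor with the same shape the example prints: (batch, grid_h, grid_w, predictions)
tensor = np.random.rand(1, 9, 16, 85).astype(np.float32)

# Split the 85 prediction values per grid cell
boxes = tensor[..., 0:4]     # 4 bounding-box values (e.g. x, y, w, h)
conf = tensor[..., 4:5]      # 1 objectness / confidence score
classes = tensor[..., 5:85]  # 80 class probabilities (COCO)

print(boxes.shape, conf.shape, classes.shape)
# (1, 9, 16, 4) (1, 9, 16, 1) (1, 9, 16, 80)
```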
For this NN architecture we could just use the DetectionNetwork node, as it does both inference and decoding on the device (see Detection network example for more info).
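For reference, a minimal sketch of that DetectionNetwork variant might look like the following. This is an untested assumption modeled on the NeuralNetwork source code below (same model description, same queue pattern), and it requires a connected device to run:

```python
import depthai as dai

with dai.Pipeline() as pipeline:
    cameraNode = pipeline.create(dai.node.Camera).build()
    # DetectionNetwork runs both inference and decoding on the device
    detectionNetwork = pipeline.create(dai.node.DetectionNetwork).build(
        cameraNode, dai.NNModelDescription("yolov6-nano"))
    qDet = detectionNetwork.out.createOutputQueue()

    pipeline.start()
    while pipeline.isRunning():
        inDet = qDet.get()
        # Decoded detections instead of a raw tensor
        for det in inDet.detections:
            print(det.label, det.confidence)
```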

Deploying custom model

If you're using a .dlc file, you can deploy it to an OAK4 device by editing the NeuralNetwork example with the following snippet:
Python
nn = pipeline.create(dai.node.NeuralNetwork)
nn.setModelPath('my_model.dlc')
nn.setBackend("snpe")  # Specify SNPE NN backend. This usually gets set under the hood
# Specify SNPE (RVC4) specific settings, like DSP runtime and NN performance profile
nn.setBackendProperties({"runtime": "dsp", "performance_profile": "default"})
Or, if you're using an NN archive (.tar.xz), you can edit that example with this snippet:
Python
cam = pipeline.create(dai.node.Camera).build(socket)
# If your NN model requires 640x640 input size (BGR):
cam_out = cam.requestOutput((640, 640), dai.ImgFrame.Type.BGR888p)

nn_archive = dai.NNArchive('./my_nn_archive.tar.xz')
nn = pipeline.create(dai.node.NeuralNetwork).build(cam_out, nn_archive)
This example requires the DepthAI v3 API, see installation instructions.

Pipeline

Source code

Python
#!/usr/bin/env python3
import cv2
import depthai as dai
import numpy as np
import time

# Create pipeline
with dai.Pipeline() as pipeline:
    cameraNode = pipeline.create(dai.node.Camera).build()
    # Longer form - useful in case of a local NNArchive
    # modelDescription = dai.NNModelDescription("yolov6-nano", platform=pipeline.getDefaultDevice().getPlatformAsString())
    # archive = dai.NNArchive(dai.getModelFromZoo(modelDescription))
    # neuralNetwork = pipeline.create(dai.node.NeuralNetwork).build(cameraNode, archive)
    neuralNetwork = pipeline.create(dai.node.NeuralNetwork).build(cameraNode, dai.NNModelDescription("yolov6-nano"))

    qNNData = neuralNetwork.out.createOutputQueue()

    pipeline.start()

    while pipeline.isRunning():
        inNNData: dai.NNData = qNNData.get()
        tensor = inNNData.getFirstTensor()
        assert isinstance(tensor, np.ndarray)
        print(f"Received NN data: {tensor.shape}")

Need assistance?

Head over to the Discussion Forum for technical support or any other questions you might have.