Inference

Overview

Models converted for RVC Platforms can be deployed on OAK devices to perform inference. The following section guides you through setting up a simple inference pipeline for a desired AI model. We use DepthAI to build the inference pipeline as a sequence of nodes, which come in two kinds: Built-in nodes, which run on the device, and Host nodes, which run on the host machine. Nodes of both kinds can be connected interchangeably. Built-in nodes are stable, optimized, and ensure efficient performance on Luxonis devices, while Host nodes offer greater flexibility and can be customized to meet specific use cases. Please check out the DepthAI Nodes library for our in-house collection of Python host nodes.

The inference pipeline can be defined manually, node by node. However, we also offer a degree of automation in pipeline creation based on the relevant NN Archive (for example, automatically connecting the neural network with the specific host node responsible for decoding its outputs). Please see below for more information.

Installation

Creating an inference pipeline requires the DepthAI (v3) library. Using our custom host nodes (e.g. for model output decoding) additionally requires the DepthAI Nodes library. You can install both using pip:
Command Line
pip install --pre depthai --force-reinstall
pip install depthai-nodes

Inference Pipeline

We present here a simple inference pipeline template. It consists of four main sections that we describe in more detail below:
  • Camera,
  • Model and Parser(s),
  • Queue(s),
  • Results.
Python
import depthai as dai
from depthai_nodes.node import ParsingNeuralNetwork

model = "..."  # NN Archive or HubAI model identifier

# Create pipeline
with dai.Pipeline() as pipeline:

    # Camera
    camera = pipeline.create(dai.node.Camera).build()

    # Model and Parser(s)
    nn_with_parser = pipeline.create(ParsingNeuralNetwork).build(
        camera, model
    )

    # Queue(s)
    parser_output_queue = nn_with_parser.out.createOutputQueue()

    # Start pipeline
    pipeline.start()

    while pipeline.isRunning():

        # Results
        ...

Camera

The inference pipeline starts with the Camera node. It is the source of the image frames that inference runs on. The node can be added to a pipeline as follows:
Python
camera_node = pipeline.create(dai.node.Camera).build()

Model and Parser(s)

Inference consists of two steps. First, the model makes predictions on the input data. Second, a postprocessing node, also known as a Parser, processes the model output(s). This step is optional; if skipped, the raw model output is returned. You can find more information about the available parsers in the DepthAI Nodes library.

The model is set up using the NeuralNetwork node. DepthAI Nodes parsers can be used as standalone nodes or merged together with NeuralNetwork nodes into ParsingNeuralNetwork nodes. The latter automatically links the model outputs with the appropriate parsers as defined in the relevant NN Archive. This abstracts away all the configuration details and is thus the preferred way of interacting with parsers. The created nodes (standalone or not) can be used the same way as standard DepthAI nodes, either by linking them to other nodes or by defining output queues on them.

The ParsingNeuralNetwork node can be instantiated directly from:
  • NN Archive, or
  • HubAI (which downloads NN Archive from the HubAI cloud).
When initialized, the pipeline automatically detects the platform of the connected device and sets up both the model and the relevant parser(s). Moreover, it sets up the camera node and links it to the model input.

NN Archive

You can prepare the NN Archive locally or download it from HubAI. Initialization of the ParsingNeuralNetwork node from an NN Archive is done as follows:
Python
# Set up NN Archive
nn_archive = dai.NNArchive(<path/to/NNArchiveName.tar.xz>)

# Set up model (with parser(s)) and link it to camera output
nn_with_parser = pipeline.create(ParsingNeuralNetwork).build(
    cameraNode, nn_archive
)

HubAI

When instantiating from HubAI, you must provide the HubAI model identifier, which uniquely identifies a model on the HubAI platform. You can find more information in the Model Upload/Download section.
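As a sketch, HubAI model identifiers generally follow a slug-like shape of the form team/model:variant. The specific identifier below is illustrative only; look up the exact slug for your model on the HubAI platform.

```python
# Illustrative HubAI model identifier; the exact slug is an example,
# but the general shape is <team>/<model>:<variant>.
model = "luxonis/yolov6-nano:r2-coco-512x288"

# The identifier can be split into its parts:
team, rest = model.split("/")
name, variant = rest.split(":")
print(team, name, variant)  # luxonis yolov6-nano r2-coco-512x288
```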
Python
# Set up the HubAI model identifier
model = "..."

# Set up model with parser(s)
nn_with_parser = pipeline.create(ParsingNeuralNetwork).build(
    camera_node, model
)

Manual Setup

The Model and the Parser can also be instantiated as standalone nodes. First, initialize them by calling the create() method on the pipeline:
Python
from depthai_nodes.node import <ParserName>
...
model = pipeline.create(dai.node.NeuralNetwork)
parser = pipeline.create(<ParserName>)
This initializes the nodes with their default parameters. The parser node can further be configured according to your needs either:
  • at initialization: pass the parameter values as arguments to the create() method: parser = pipeline.create(<ParserName>, <ParameterName>=<ParameterValue>, ...). If configuring multiple parameters, you can arrange them into a dict and pass it as an argument to the parser's build() method: parser = pipeline.create(<ParserName>).build(config_dict);
  • after initialization: change the configuration using the setter methods: parser.<SetterMethodName>(<ParameterValue>). You can find all the setter methods available for a specific parser on the DepthAI Nodes API Reference page.
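As a sketch, a configuration dict for the build() method might look like the following. The parameter names here are hypothetical: the parameters a parser actually accepts depend on the specific parser, so consult the DepthAI Nodes API Reference for your parser before using them.

```python
# Hypothetical parser configuration; the parameter names below are
# illustrative only and depend on the specific parser in use.
config_dict = {
    "conf_threshold": 0.5,  # assumed name for a confidence threshold
    "iou_threshold": 0.45,  # assumed name for an NMS IoU threshold
}

# The dict would then be passed to the parser's build() method, e.g.:
# parser = pipeline.create(<ParserName>).build(config_dict)
```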
Second, set the model executable (i.e. the .blob for RVC2, or the .dlc file for RVC4):
Python
model.setModelPath(<path/to/model_executable>)
Third, prepare the camera stream and link the standalone nodes to constitute a pipeline:
Python
width, height = ...  # model input size
camera_stream = camera.requestOutput(size=(width, height))
camera_stream.link(model.input)
model.out.link(parser.input)

Queue(s)

Queues are used to obtain data from specific nodes of the pipeline. To obtain the image frame that gets input to the model, you can use the passthrough queue:
Python
frame_queue = nn_with_parser.passthrough.createOutputQueue()
To obtain the (parsed) model output, you can use the output queue(s). The definition depends on the number of model heads:

Single-Headed

Python
parser_output_queue = nn_with_parser.out.createOutputQueue()

Multi-Headed

Python
head0_parser_output_queue = nn_with_parser.getOutput(0).createOutputQueue()
head1_parser_output_queue = nn_with_parser.getOutput(1).createOutputQueue()
...

Results

After the pipeline is started with pipeline.start(), outputs can be obtained from the defined queue(s). You can obtain the input frame and parsed model outputs as:
Python
while pipeline.isRunning():

    # Get Camera Output
    frame_queue_output = frame_queue.get()
    frame = frame_queue_output.getCvFrame()
    ...

    # Get Parsed Output(s)
    parser_output = parser_output_queue.get()
    ...
The parsed model outputs are returned as DepthAI Nodes message objects. Please read the DepthAI Nodes API reference to learn more about the relevant formats and how to utilize them for your use case.

Examples

Please consult the OAK Examples page.