NeuralNetwork
This node runs neural inference on input data. Any model can be executed, as long as the VPU supports all required layers. You can use .blob, .superblob, or NNArchive formats, and target various platforms (RVC2, RVC3, RVC4). Models can be sourced from:
- Open Model Zoo (200+ pre-trained models, RVC2 only)
- DepthAI Model Zoo
Custom models can be converted to a supported format (.blob, .superblob, or NNArchive) using the model conversion guide.
Building a NeuralNetwork node in Python
Use one of the build() class methods to construct and link the node in a single call:
Python
import depthai as dai

pipeline = dai.Pipeline()

# 1. From a tensor input + NNArchive
tensor_input = ...  # e.g. output from another node
nn_archive = dai.NNArchive('path/to/archive.tar.gz')
nn = dai.node.NeuralNetwork.build(tensor_input, nn_archive)

# 2. From a Camera node + NNModelDescription (+ optional fps)
cam = pipeline.create(dai.node.ColorCamera)
model_desc = dai.NNModelDescription(
    model='yolov6-nano',
    platform=''  # empty to auto-detect
)
nn = dai.node.NeuralNetwork.build(cam, model_desc, fps=30.0)

# 3. From a ReplayVideo node + NNArchive (+ optional fps)
replay = pipeline.create(dai.node.ReplayVideo)
replay.setSourcePath('video.mp4')
nn = dai.node.NeuralNetwork.build(replay, nn_archive, fps=15.0)
Each build() overload will:
- Download or accept a local model archive
- Validate that the archive is in NNArchive format
- Configure input frame capabilities (resolution, type, fps)
- Link the camera or tensor output directly to nn.input
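The archive-validation step above can be illustrated in isolation. An NNArchive is distributed as a gzipped tar bundle containing the model and its configuration; the sketch below (standard library only, file names inside the archive are illustrative assumptions, not the exact NNArchive layout) shows a check in the same spirit:

```python
import io
import json
import tarfile

def make_dummy_archive(path):
    # Build a minimal tar.gz that mimics an archive bundling a config file
    # (the name "config.json" is an illustrative assumption).
    with tarfile.open(path, "w:gz") as tar:
        config = json.dumps({"model": {"inputs": [], "outputs": []}}).encode()
        info = tarfile.TarInfo("config.json")
        info.size = len(config)
        tar.addfile(info, io.BytesIO(config))

def looks_like_archive(path):
    # Accept only a readable gzipped tar that carries a config file,
    # similar in spirit to the validation build() performs.
    try:
        with tarfile.open(path, "r:gz") as tar:
            return any(m.name.endswith("config.json") for m in tar.getmembers())
    except (tarfile.TarError, OSError):
        return False

make_dummy_archive("dummy_archive.tar.gz")
print(looks_like_archive("dummy_archive.tar.gz"))  # True
```

In the real API this check is internal to build()/setNNArchive; the sketch only shows why a plain .blob file would be rejected by an NNArchive loader.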
Manual instantiation
If you prefer manual setup, use:
Python
pipeline = dai.Pipeline()

# Create node
nn = pipeline.create(dai.node.NeuralNetwork)

# Load a NNArchive
nn_archive = dai.NNArchive('path/to/archive.tar.gz')

# Set the NNArchive on the NN node
nn.setNNArchive(nn_archive)
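The reference below lists several loaders (setNNArchive, setBlob, setBlobPath, setModelPath), each tied to a file format. As a quick orientation, a hypothetical helper (not part of the DepthAI API) mapping a model file to the matching setter name might look like:

```python
from pathlib import Path

def pick_loader(model_path):
    # Map a model file extension to the matching NeuralNetwork setter name.
    # This helper is illustrative only; the setter names come from the
    # reference section of this page.
    p = Path(model_path)
    if "".join(p.suffixes).endswith(".tar.gz"):
        return "setNNArchive"   # NNArchive bundle
    if p.suffix in (".blob", ".superblob"):
        return "setBlob"        # compiled network blob
    if p.suffix == ".xml":
        return "setModelPath"   # OpenVINO IR (xml + bin)
    raise ValueError(f"unsupported model format: {model_path}")

print(pick_loader("model.blob"))      # setBlob
print(pick_loader("archive.tar.gz"))  # setNNArchive
```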
Inputs and Outputs
Input | Type | Description |
---|---|---|
input | Any tensor/ImgFrame | Tensor or ImgFrame for inference |
Output | Type | Description |
---|---|---|
out | NNData | Inference results (layer blobs, outputs) |
passthrough | ImgFrame | Original frame on which inference was performed |
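Results arrive on out as NNData, whose tensors are typically flat arrays. For a classification model the logits are commonly decoded with a softmax and argmax; a pure-Python sketch (obtaining the logits list from NNData is assumed and not shown):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a flat list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top1(logits):
    # Return (class index, confidence) of the most likely class.
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs[best]

idx, conf = top1([0.1, 2.5, 0.3])
print(idx)  # 1
```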
Examples and Experiments
- Neural network - Create a simple pipeline with a camera and neural network node.
- Neural network multi-input - Run a neural network model that concatenates a camera frame with a static image using two input tensors.
- Neural network multi-input combined - Run a neural network model that combines two input images into a single output image.
Reference
class
dai::node::NeuralNetwork
variable
Input input
Input message with data to be inferred upon
variable
Output out
Outputs inference results (NNData)
variable
Output passthrough
Passthrough message on which the inference was performed. Suitable for when input queue is set to non-blocking behavior.
variable
InputMap inputs
Inputs mapped to network inputs. Useful for inferring from separate data sources. Default input is non-blocking with queue size 1 and waits for messages.
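The default non-blocking queue of size 1 means a newly arriving message simply evicts the stale one, so inference always runs on the freshest frame. A minimal illustration of that behavior (not the device implementation):

```python
from collections import deque

class NonBlockingInput:
    # Queue size 1, non-blocking: a new message overwrites the old one,
    # mirroring the default input behavior described above.
    def __init__(self):
        self._q = deque(maxlen=1)

    def send(self, msg):
        self._q.append(msg)  # never blocks; evicts any stale message

    def get(self):
        return self._q[-1] if self._q else None

inp = NonBlockingInput()
inp.send("frame-1")
inp.send("frame-2")
print(inp.get())  # frame-2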
variable
OutputMap passthroughs
Passthroughs which correspond to specified input
function
std::shared_ptr< NeuralNetwork > build(Node::Output & input, const NNArchive & nnArchive)
function
std::shared_ptr< NeuralNetwork > build(const std::shared_ptr< Camera > & input, NNModelDescription modelDesc, std::optional< float > fps)
function
std::shared_ptr< NeuralNetwork > build(const std::shared_ptr< Camera > & input, NNArchive nnArchive, std::optional< float > fps)
function
std::shared_ptr< NeuralNetwork > build(const std::shared_ptr< ReplayVideo > & input, NNModelDescription modelDesc, std::optional< float > fps)
function
std::shared_ptr< NeuralNetwork > build(const std::shared_ptr< ReplayVideo > & input, const NNArchive & nnArchive, std::optional< float > fps)
function
void setNNArchive(const NNArchive & nnArchive)
function
void setNNArchive(const NNArchive & nnArchive, int numShaves)
function
void setFromModelZoo(NNModelDescription description, bool useCached)
function
void setBlobPath(const dai::Path & path)
Load network blob into assets and use once pipeline is started.
Throws
- Error: if file doesn't exist or isn't a valid network blob.
Parameters
- path: Path to network blob
function
void setBlob(OpenVINO::Blob blob)
Load network blob into assets and use once pipeline is started.
Parameters
- blob: Network blob
function
void setBlob(const dai::Path & path)
Same functionality as setBlobPath(): load network blob into assets and use once pipeline is started.
Throws
- Error: if file doesn't exist or isn't a valid network blob.
Parameters
- path: Path to network blob
function
void setModelPath(const dai::Path & modelPath)
Load network xml and bin files into assets.
Parameters
- modelPath: Path to the neural network model file.
function
void setNumPoolFrames(int numFrames)
Specifies how many frames will be available in the pool
Parameters
- numFrames: How many frames will pool have
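The pool bounds how many frames can be in flight at once: when all pooled frames are handed out, producers wait until one is returned. A minimal pool sketch to illustrate the mechanism (not the device implementation):

```python
from queue import Queue

class FramePool:
    # Fixed-size frame pool: acquire() blocks once all frames are
    # handed out, until a frame is release()'d back.
    def __init__(self, num_frames):
        self._free = Queue()
        for _ in range(num_frames):
            self._free.put(bytearray(8))  # placeholder frame buffers

    def acquire(self):
        return self._free.get()

    def release(self, frame):
        self._free.put(frame)

pool = FramePool(4)
frame = pool.acquire()
# ... fill and consume the frame ...
pool.release(frame)
```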
function
void setNumInferenceThreads(int numThreads)
How many threads should the node use to run the network.
Parameters
- numThreads: Number of threads to dedicate to this node
function
void setNumNCEPerInferenceThread(int numNCEPerThread)
How many Neural Compute Engines should a single thread use for inference
Parameters
- numNCEPerThread: Number of NCE per thread
function
void setNumShavesPerInferenceThread(int numShavesPerThread)
How many Shaves should a single thread use for inference
Parameters
- numShavesPerThread: Number of shaves per thread
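Threads, NCEs, and SHAVEs partition the VPU's fixed compute resources. For example, assuming an RVC2 device with 16 SHAVE cores (a device-specific assumption), running 2 inference threads with an even split gives 8 SHAVEs per thread:

```python
def shaves_per_thread(total_shaves, num_threads):
    # Evenly split the available SHAVE cores across inference threads;
    # any remainder is simply left unassigned by this naive split.
    if num_threads <= 0:
        raise ValueError("num_threads must be positive")
    return total_shaves // num_threads

# e.g. 2 inference threads on a 16-SHAVE device:
print(shaves_per_thread(16, 2))  # 8
```

The value computed this way would be what one passes to setNumShavesPerInferenceThread alongside setNumInferenceThreads.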
function
void setBackend(std::string backend)
Specifies backend to use
Parameters
- backend: String specifying backend to use
function
void setBackendProperties(std::map< std::string, std::string > properties)
Set backend properties
Parameters
- backendProperties: backend properties map
function
int getNumInferenceThreads()
How many inference threads will be used to run the network
Returns
Number of threads: 0, 1 or 2. Zero means AUTO.
inline function
DeviceNodeCRTP()
inline function
DeviceNodeCRTP(const std::shared_ptr< Device > & device)
inline function
DeviceNodeCRTP(std::unique_ptr< Properties > props)
inline function
DeviceNodeCRTP(std::unique_ptr< Properties > props, bool confMode)
inline function
DeviceNodeCRTP(const std::shared_ptr< Device > & device, std::unique_ptr< Properties > props, bool confMode)