NeuralNetwork

This node runs neural inference on input data. Any OpenVINO neural network can be run using this node, as long as the VPU supports all of its layers. This allows you to pick from 200+ pre-trained models from the Open Model Zoo and the DepthAI Model Zoo and run them directly on the OAK device.

The neural network has to be in the .blob format to be compatible with the VPU. Instructions on how to compile your neural network (NN) into a .blob can be found here.
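
One common route is the blobconverter Python package, which wraps the online BlobConverter service. A minimal sketch, where the model name and shave count are illustrative values, not requirements:

# pip install blobconverter
import blobconverter

# Download a model from the Open Model Zoo and compile it into a .blob;
# "mobilenet-ssd" and shaves=6 are illustrative
blob_path = blobconverter.from_zoo(name="mobilenet-ssd", shaves=6)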

How to place it

Python:

pipeline = dai.Pipeline()
nn = pipeline.create(dai.node.NeuralNetwork)

C++:

dai::Pipeline pipeline;
auto nn = pipeline.create<dai::node::NeuralNetwork>();

Inputs and Outputs

            ┌───────────────────┐
            │                   │       out
            │                   ├───────────►
            │                   │
            │   NeuralNetwork   │
input       │                   │ passthrough
───────────►│-------------------├───────────►
            │                   │
            └───────────────────┘

Message types

• input: Buffer
• out: NNData
• passthrough: Buffer

Passthrough mechanism

The passthrough mechanism is very useful when a node's input is set to non-blocking, where queued messages can be overwritten. In that case we don't know which message the node actually performed its operation on (e.g. for a NN: was inference done on frame 25, or was frame 25 skipped and inference performed on frame 26?). At the same time, it means that if the XLink and host input queues are blocking, and we receive both the passthrough and the output, we can do a blocking get on both of those queues and be sure to always get matching frames. They might not arrive at the same time, but both of them will arrive, and they will be in the correct spot in their queues to be taken out together.
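
A minimal host-side sketch of this pairing, assuming the pipeline additionally links nn.passthrough to an XLinkOut stream named "pass" (the stream names and queue sizes here are illustrative):

qNn = device.getOutputQueue("nn", maxSize=4, blocking=True)
qPass = device.getOutputQueue("pass", maxSize=4, blocking=True)

while True:
    inference = qNn.get()  # Blocking get on the NN results
    frame = qPass.get()    # Blocking get; the exact frame inference ran on
    # With both queues blocking, the two stay in lockstep:
    assert inference.getSequenceNum() == frame.getSequenceNum()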

Usage

Python:

import depthai as dai

pipeline = dai.Pipeline()
nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath(bbBlobPath)  # Path to the compiled .blob file
cam.out.link(nn.input)      # 'cam' is a camera node created elsewhere

# Send NN out to the host via XLink
nnXout = pipeline.create(dai.node.XLinkOut)
nnXout.setStreamName("nn")
nn.out.link(nnXout.input)

with dai.Device(pipeline) as device:
  qNn = device.getOutputQueue("nn")

  nnData = qNn.get() # Blocking

  # NN can output from multiple layers. Print all layer names:
  print(nnData.getAllLayerNames())

  # Get layer named "Layer1_FP16" as FP16
  layer1Data = nnData.getLayerFp16("Layer1_FP16")

  # You can now decode the output of your NN

C++:

dai::Pipeline pipeline;
auto nn = pipeline.create<dai::node::NeuralNetwork>();
nn->setBlobPath(bbBlobPath);  // Path to the compiled .blob file
cam->out.link(nn->input);     // 'cam' is a camera node created elsewhere

// Send NN out to the host via XLink
auto nnXout = pipeline.create<dai::node::XLinkOut>();
nnXout->setStreamName("nn");
nn->out.link(nnXout->input);

// Constructing the device with the pipeline uploads and starts it
dai::Device device(pipeline);

auto qNn = device.getOutputQueue("nn");

auto nnData = qNn->get<dai::NNData>(); // Blocking

// NN can output from multiple layers. Print all layer names:
for(const auto& name : nnData->getAllLayerNames()) {
    std::cout << name << std::endl;
}

// Get layer named "Layer1_FP16" as FP16
auto layer1Data = nnData->getLayerFp16("Layer1_FP16");

// You can now decode the output of your NN
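
For illustration, decoding the FP16 layer on the host in Python (the layer name comes from the snippet above; the output shape is hypothetical and depends on your network):

import numpy as np

# Convert the flat FP16 list into an array; the reshape target below is
# a hypothetical placeholder - use your network's real output shape
data = np.array(nnData.getLayerFp16("Layer1_FP16"), dtype=np.float16)
# output = data.reshape((1, 255, 13, 13))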

Reference

class depthai.node.NeuralNetwork

NeuralNetwork node. Runs a neural inference on input data.

class Connection

Connection between an Input and Output

class Id

Node identifier. Unique for every node in a single Pipeline

Properties

alias of depthai.NeuralNetworkProperties

getAssetManager(*args, **kwargs)

Overloaded function.

  1. getAssetManager(self: depthai.Node) -> depthai.AssetManager

Get node AssetManager as a const reference

  2. getAssetManager(self: depthai.Node) -> depthai.AssetManager

Get node AssetManager as a const reference

getInputRefs(*args, **kwargs)

Overloaded function.

  1. getInputRefs(self: depthai.Node) -> List[depthai.Node.Input]

Retrieves reference to node inputs

  2. getInputRefs(self: depthai.Node) -> List[depthai.Node.Input]

Retrieves reference to node inputs

getInputs(self: depthai.Node) → List[depthai.Node.Input]

Retrieves all of the node's inputs

getName(self: depthai.Node) → str

Retrieves the node's name

getNumInferenceThreads(self: depthai.node.NeuralNetwork) → int

How many inference threads will be used to run the network

Returns

Number of threads, 0, 1 or 2. Zero means AUTO

getOutputRefs(*args, **kwargs)

Overloaded function.

  1. getOutputRefs(self: depthai.Node) -> List[depthai.Node.Output]

Retrieves reference to node outputs

  2. getOutputRefs(self: depthai.Node) -> List[depthai.Node.Output]

Retrieves reference to node outputs

getOutputs(self: depthai.Node) → List[depthai.Node.Output]

Retrieves all of the node's outputs

getParentPipeline(*args, **kwargs)

Overloaded function.

  1. getParentPipeline(self: depthai.Node) -> depthai.Pipeline

  2. getParentPipeline(self: depthai.Node) -> depthai.Pipeline

property id

Id of node

property input

Input message with data to be inferred upon. Default queue is blocking with size 5.

property inputs

Inputs mapped to network inputs. Useful for inferring from separate data sources. Default input is non-blocking with queue size 1 and waits for messages.
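
For example, a minimal sketch of feeding a two-input network through this map ('camA'/'camB' are camera nodes created elsewhere; the tensor names 'img1' and 'img2' are hypothetical and must match the names compiled into the blob):

camA.preview.link(nn.inputs['img1'])
camB.preview.link(nn.inputs['img2'])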

property out

Outputs NNData message that carries inference results

property passthrough

Passthrough message on which the inference was performed.

Suitable for when the input queue is set to non-blocking behavior.

property passthroughs

Passthroughs which correspond to specified input

setBlob(*args, **kwargs)

Overloaded function.

  1. setBlob(self: depthai.node.NeuralNetwork, blob: depthai.OpenVINO.Blob) -> None

Load network blob into assets and use once pipeline is started.

Parameter blob:

Network blob

  2. setBlob(self: depthai.node.NeuralNetwork, path: Path) -> None

Same functionality as the setBlobPath(). Load network blob into assets and use once pipeline is started.

Throws:

Error if file doesn’t exist or isn’t a valid network blob.

Parameter path:

Path to network blob

setBlobPath(self: depthai.node.NeuralNetwork, path: Path) → None

Load network blob into assets and use once pipeline is started.

Throws:

Error if file doesn’t exist or isn’t a valid network blob.

Parameter path:

Path to network blob

setNumInferenceThreads(self: depthai.node.NeuralNetwork, numThreads: int) → None

How many threads should the node use to run the network.

Parameter numThreads:

Number of threads to dedicate to this node

setNumNCEPerInferenceThread(self: depthai.node.NeuralNetwork, numNCEPerThread: int) → None

How many Neural Compute Engines should a single thread use for inference

Parameter numNCEPerThread:

Number of NCE per thread

setNumPoolFrames(self: depthai.node.NeuralNetwork, numFrames: int) → None

Specifies how many frames will be available in the pool

Parameter numFrames:

How many frames the pool will have
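
Taken together, a hedged tuning sketch using the setters above (the values are illustrative, not recommendations):

nn.setNumInferenceThreads(2)       # 0 (AUTO), 1 or 2
nn.setNumNCEPerInferenceThread(1)  # Neural Compute Engines per inference thread
nn.setNumPoolFrames(4)             # Frames available in the output pool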

class dai::node::NeuralNetwork : public dai::NodeCRTP<Node, NeuralNetwork, NeuralNetworkProperties>

NeuralNetwork node. Runs a neural inference on input data.

Public Functions

NeuralNetwork(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId)
NeuralNetwork(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId, std::unique_ptr<Properties> props)
void setBlobPath(const dai::Path &path)

Load network blob into assets and use once pipeline is started.

Exceptions
  • Error: if file doesn’t exist or isn’t a valid network blob.

Parameters
  • path: Path to network blob

void setBlob(OpenVINO::Blob blob)

Load network blob into assets and use once pipeline is started.

Parameters
  • blob: Network blob

void setBlob(const dai::Path &path)

Same functionality as the setBlobPath(). Load network blob into assets and use once pipeline is started.

Exceptions
  • Error: if file doesn’t exist or isn’t a valid network blob.

Parameters
  • path: Path to network blob

void setNumPoolFrames(int numFrames)

Specifies how many frames will be available in the pool

Parameters
  • numFrames: How many frames the pool will have

void setNumInferenceThreads(int numThreads)

How many threads should the node use to run the network.

Parameters
  • numThreads: Number of threads to dedicate to this node

void setNumNCEPerInferenceThread(int numNCEPerThread)

How many Neural Compute Engines should a single thread use for inference

Parameters
  • numNCEPerThread: Number of NCE per thread

int getNumInferenceThreads()

How many inference threads will be used to run the network

Return

Number of threads, 0, 1 or 2. Zero means AUTO

Public Members

Input input = {*this, "in", Input::Type::SReceiver, true, 5, true, {{DatatypeEnum::Buffer, true}}}

Input message with data to be inferred upon. Default queue is blocking with size 5.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::NNData, false}}}

Outputs NNData message that carries inference results

Output passthrough = {*this, "passthrough", Output::Type::MSender, {{DatatypeEnum::Buffer, true}}}

Passthrough message on which the inference was performed.

Suitable for when the input queue is set to non-blocking behavior.

InputMap inputs

Inputs mapped to network inputs. Useful for inferring from separate data sources. Default input is non-blocking with queue size 1 and waits for messages.

OutputMap passthroughs

Passthroughs which correspond to specified input

Public Static Attributes

static constexpr const char *NAME = "NeuralNetwork"
