# NeuralNetwork

This node runs neural inference on input data. Any model can be executed, as long as the VPU supports all required layers. You can
use .blob, .superblob, or NNArchive formats, and target various platforms (RVC2, RVC3, RVC4). Models can be sourced from:

 * [HubAI Model Zoo](https://models.luxonis.com/)

RVC2 only:

 * [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo) (200+ pre-trained models)
 * [DepthAI Model Zoo](https://github.com/luxonis/depthai-model-zoo)

Compile your network to the correct format (.blob, .superblob, or NNArchive) using the [model conversion
guide](https://docs.luxonis.com/software-v3/ai-inference/conversion.md).

## Building a NeuralNetwork node in Python

Use one of the build() methods to configure and link the node in a single call:

```python
import depthai as dai

pipeline = dai.Pipeline()

# 1. From a tensor output + NNArchive
tensor_input = ...  # e.g. output from another node
nn_archive = dai.NNArchive('path/to/archive.tar.gz')
nn = pipeline.create(dai.node.NeuralNetwork).build(tensor_input, nn_archive)

# 2. From a Camera node + NNModelDescription (+ optional fps)
cam = pipeline.create(dai.node.Camera).build()
model_desc = dai.NNModelDescription(
    model='yolov6-nano',
    platform=''  # empty to auto-detect
)
nn = pipeline.create(dai.node.NeuralNetwork).build(cam, model_desc, fps=30.0)

# 3. From a ReplayVideo node + NNArchive (+ optional fps)
replay = pipeline.create(dai.node.ReplayVideo)
replay.setReplayVideoFile('video.mp4')
nn = pipeline.create(dai.node.NeuralNetwork).build(replay, nn_archive, fps=15.0)
```

These methods will:

 1. Download or accept a local model archive
 2. Validate that the archive is in NNArchive format
 3. Configure input frame capabilities (resolution, type, fps)
 4. Link the camera or tensor output directly to nn.input
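Steps 1 and 2 above can also be approximated off-device when debugging a model download: an NN Archive is a tarball that bundles the model file with a config describing it. Below is a minimal sketch, assuming a gzipped archive with a top-level `config.json`; the file name and config keys used here are illustrative, not the exact NN Archive schema:

```python
import io
import json
import os
import tarfile
import tempfile

def peek_nn_archive(path: str):
    """Return (member names, parsed config) for a gzipped archive.

    Assumes the archive carries a config JSON next to the model file,
    as in the NN Archive layout; adjust the name if yours differs.
    """
    with tarfile.open(path, "r:gz") as tar:
        names = tar.getnames()
        cfg_name = next(n for n in names if n.endswith("config.json"))
        config = json.load(tar.extractfile(cfg_name))
    return names, config

# Demo with a synthetic archive so the sketch is self-contained.
path = os.path.join(tempfile.mkdtemp(), "demo_archive.tar.gz")
cfg_bytes = json.dumps({"model": {"metadata": {"name": "demo"}}}).encode()
with tarfile.open(path, "w:gz") as tar:
    info = tarfile.TarInfo("config.json")
    info.size = len(cfg_bytes)
    tar.addfile(info, io.BytesIO(cfg_bytes))

names, config = peek_nn_archive(path)
print(names, config["model"]["metadata"]["name"])
```

This is only a sanity check on the container format; the build() methods perform the authoritative validation.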

## Manual instantiation

If you prefer manual setup, use:

```python
pipeline = dai.Pipeline()

# Create node
nn = pipeline.create(dai.node.NeuralNetwork)

# Load a NNArchive
nn_archive = dai.NNArchive('path/to/archive.tar.gz')

# Set the NNArchive on the NN node
nn.setNNArchive(nn_archive)
```

## Inputs and Outputs

| Input | Type | Description |
| --- | --- | --- |
| `input` | Tensor/ImgFrame | Tensor or ImgFrame to run inference on |

| Output | Type | Description |
| --- | --- | --- |
| `out` | NNData | Inference results (output tensors/layers) |
| `passthrough` | ImgFrame | Original input frame, passed through unchanged |
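The `out` message carries raw tensors, so results typically need host-side decoding. Below is a minimal sketch of classification post-processing with NumPy; the raw vector is simulated with a fixed array here, where a real pipeline would pull it from the NNData message via its tensor getters:

```python
import numpy as np

def decode_classification(raw: np.ndarray, top_k: int = 3):
    """Softmax raw logits and return the top-k (class index, probability) pairs."""
    logits = raw.astype(np.float32).ravel()
    exps = np.exp(logits - logits.max())   # subtract max for numerical stability
    probs = exps / exps.sum()
    top = np.argsort(probs)[::-1][:top_k]
    return [(int(i), float(probs[i])) for i in top]

# Simulated FP16 output vector; a real one would come from the NNData message.
raw = np.array([0.1, 2.5, 0.3, 1.2], dtype=np.float16)
print(decode_classification(raw, top_k=2))  # highest-probability class first
```

The same pattern (reshape, normalize, argsort) applies to most single-tensor outputs; detection models need model-specific decoding instead.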

## Examples and Experiments

 * [Neural network](https://docs.luxonis.com/software-v3/depthai/examples/neural_network/neural_network.md) - Create a simple
   pipeline with a camera and neural network node.
 * [Neural network
   multi-input](https://docs.luxonis.com/software-v3/depthai/examples/neural_network/neural_network_multi_input.md) - Run a neural
   network model that concatenates a camera frame with a static image using two input tensors.
 * [Neural network multi-input
   combined](https://docs.luxonis.com/software-v3/depthai/examples/neural_network/neural_network_multi_input_combined.md) - Run a
   neural network model that combines two input images into a single output image.

## Reference

### dai::node::NeuralNetwork

Kind: class

NeuralNetwork node. Runs neural inference on input data.

#### std::variant< NNModelDescription , NNArchive , std::string > Model

Kind: type alias

#### Input input

Kind: variable

Input message with data to be inferred upon

#### Output out

Kind: variable

Outputs NNData message that carries inference results

#### Output passthrough

Kind: variable

Passthrough message on which the inference was performed. Suitable for when input queue is set to non-blocking behavior.

#### InputMap inputs

Kind: variable

Inputs mapped to network inputs. Useful for inferring from separate data sources. Default input is non-blocking with queue size 1
and waits for messages.

#### OutputMap passthroughs

Kind: variable

Passthroughs which correspond to specified input

#### ~NeuralNetwork()

Kind: function

#### std::shared_ptr< NeuralNetwork > build(Node::Output & input, const NNArchive & nnArchive)

Kind: function

Build NeuralNetwork node. Connects the given output to this node's input and sets up the NNArchive.

parameters: input: Output to link; nnArchive: Neural network archive

return: Shared pointer to NeuralNetwork node

#### std::shared_ptr< NeuralNetwork > build(const std::shared_ptr< Camera > & input, const Model & model, std::optional< float >
fps, std::optional< dai::ImgResizeMode > resizeMode)

Kind: function

Build NeuralNetwork node. Connect Camera output to this node's input and configure the inference model.

parameters: input: Camera node; model: Neural network model description, NNArchive or HubAI model id string; fps: Desired frames
per second; resizeMode: Resize mode for input frames

return: Shared pointer to NeuralNetwork node

#### std::shared_ptr< NeuralNetwork > build(const std::shared_ptr< Camera > & input, const Model & model, const ImgFrameCapability
& capability)

Kind: function

Build NeuralNetwork node. Connect Camera output to this node's input and configure the inference model.

parameters: input: Camera node; model: Neural network model description, NNArchive or HubAI model id string; capability: Camera
capabilities

return: Shared pointer to NeuralNetwork node

#### std::shared_ptr< NeuralNetwork > build(const std::shared_ptr< ReplayVideo > & input, const Model & model, std::optional<
float > fps)

Kind: function

Build NeuralNetwork node. Connect ReplayVideo output to this node's input and configure the inference model.

parameters: input: ReplayVideo node; model: Neural network model description, NNArchive or HubAI model id string; fps: Desired
frames per second

return: Shared pointer to NeuralNetwork node

#### std::optional< std::reference_wrapper< const NNArchive > > getNNArchive()

Kind: function

Get the archive owned by this Node.

return: Constant reference to this Node's archive

#### void setNNArchive(const NNArchive & nnArchive)

Kind: function

Set NNArchive for this Node. If the archive's type is SUPERBLOB, the default number of shaves is used.

parameters: nnArchive: NNArchive to set

#### void setNNArchive(const NNArchive & nnArchive, int numShaves)

Kind: function

Set NNArchive for this Node; throws if the archive's type is not SUPERBLOB.

parameters: nnArchive: NNArchive to set; numShaves: Number of shaves to use

#### void setFromModelZoo(NNModelDescription description, bool useCached)

Kind: function

Download model from zoo and set it for this Node.

parameters: description: Model description to download; useCached: Use cached model if available

#### void setBlobPath(const std::filesystem::path & path)

Kind: function

Load network blob into assets and use once pipeline is started.

throws: Error if file doesn't exist or isn't a valid network blob.

parameters: path: Path to network blob

#### void setBlob(OpenVINO::Blob blob)

Kind: function

Load network blob into assets and use once pipeline is started.

parameters: blob: Network blob

#### void setBlob(const std::filesystem::path & path)

Kind: function

Same functionality as setBlobPath(): load network blob into assets and use once pipeline is started.

throws: Error if file doesn't exist or isn't a valid network blob.

parameters: path: Path to network blob

#### void setOtherModelFormat(std::vector< uint8_t > model)

Kind: function

Load network model into assets and use once pipeline is started.

parameters: model: Network model

#### void setOtherModelFormat(const std::filesystem::path & path)

Kind: function

Load network model into assets and use once pipeline is started.

throws: Error if file doesn't exist or isn't a valid network model.

parameters: path: Path to the network model

#### void setModelPath(const std::filesystem::path & modelPath)

Kind: function

Load network xml and bin files into assets.

parameters: modelPath: Path to the neural network model file.

#### void setNumPoolFrames(int numFrames)

Kind: function

Specifies how many frames will be available in the pool.

parameters: numFrames: How many frames the pool will have

#### void setNumInferenceThreads(int numThreads)

Kind: function

Specifies how many threads the node should use to run the network.

parameters: numThreads: Number of threads to dedicate to this node

#### void setNumNCEPerInferenceThread(int numNCEPerThread)

Kind: function

Specifies how many Neural Compute Engines a single thread should use for inference.

parameters: numNCEPerThread: Number of NCEs per thread

#### void setNumShavesPerInferenceThread(int numShavesPerThread)

Kind: function

Specifies how many shaves a single thread should use for inference.

parameters: numShavesPerThread: Number of shaves per thread

#### void setBackend(std::string backend)

Kind: function

Specifies the backend to use.

parameters: backend: String specifying backend to use

#### void setBackendProperties(std::map< std::string, std::string > properties)

Kind: function

Set backend properties.

parameters: properties: Backend properties map

#### int getNumInferenceThreads()

Kind: function

Returns how many inference threads will be used to run the network.

return: Number of threads: 0, 1 or 2. Zero means AUTO

#### void setModelFromDeviceZoo(DeviceModelZoo model)

Kind: function

Set model from Device Model Zoo.

parameters: model: DeviceModelZoo model enum

note: Only applicable for RVC4 devices with OS 1.20.5 or higher

#### DeviceNodeCRTP()

Kind: function

#### DeviceNodeCRTP(const std::shared_ptr< Device > & device)

Kind: function

#### DeviceNodeCRTP(std::unique_ptr< Properties > props)

Kind: function

#### DeviceNodeCRTP(std::unique_ptr< Properties > props, bool confMode)

Kind: function

#### DeviceNodeCRTP(const std::shared_ptr< Device > & device, std::unique_ptr< Properties > props, bool confMode)

Kind: function

### Need assistance?

Head over to [Discussion Forum](https://discuss.luxonis.com/) for technical support or any other questions you might have.
