# NeuralDepth

The NeuralDepth node computes depth from a stereo camera pair using a neural network instead of traditional stereo matching
algorithms. It provides an alternative to the
[StereoDepth](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/stereo_depth.md) node with its own
benefits. You can read more about the LENS (Luxonis Edge Neural Stereo) model used under the hood in the [blog
post](https://discuss.luxonis.com/blog/6553-neural-stereo-depth-estimation-with-lens).

> NeuralDepth is only supported on **RVC4** devices with **Luxonis OS 1.20.4 or newer** installed.

## How to place it

#### Python

```python
pipeline = dai.Pipeline()
neuralDepth = pipeline.create(dai.node.NeuralDepth)
```

#### C++

```cpp
dai::Pipeline pipeline;
auto neuralDepth = pipeline.create<dai::node::NeuralDepth>();
```

## Inputs and Outputs

The node takes a `left` and `right` `ImgFrame` pair as inputs, plus an optional `inputConfig` input for changing parameters at
runtime. It produces `depth`, `disparity`, `confidence`, and `edge` `ImgFrame` outputs, as well as `rectifiedLeft` and
`rectifiedRight` frames. See the [Reference](#reference) section below for the full list.

## Model Sizes

NeuralDepth supports four model sizes, each offering a different tradeoff between resolution and performance:

| Model | Resolution | FPS |
| --- | --- | --- |
| `NEURAL_DEPTH_LARGE` | 768x480 | 10.8 |
| `NEURAL_DEPTH_MEDIUM` | 576x360 | 25.5 |
| `NEURAL_DEPTH_SMALL` (default) | 480x300 | 42.5 |
| `NEURAL_DEPTH_NANO` | 384x240 | 59.7 |
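
For a rough sense of the tradeoff, throughput in pixels per second can be derived directly from the table above (illustrative arithmetic only, not an API call):

```python
# Throughput derived from the model-size table: width * height * FPS.
models = {
    "NEURAL_DEPTH_LARGE": (768, 480, 10.8),
    "NEURAL_DEPTH_MEDIUM": (576, 360, 25.5),
    "NEURAL_DEPTH_SMALL": (480, 300, 42.5),
    "NEURAL_DEPTH_NANO": (384, 240, 59.7),
}

for name, (w, h, fps) in models.items():
    mpix_per_s = w * h * fps / 1e6
    print(f"{name}: {mpix_per_s:.1f} MPix/s")
```

Smaller models trade spatial resolution for frame rate, so pick the largest model whose FPS still meets your application's latency budget.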

## Post-Processing

NeuralDepth provides several post-processing options to improve depth quality:

 * Confidence Threshold: Pixels with confidence below this threshold are considered invalid. Valid range is [0, 255], default is
   125.
 * Edge Threshold: Pixels with edge magnitude below this value are considered invalid. Valid range is [0, 255], default is 10.
 * Temporal Filter: Averages depth values over time to reduce noise. The filter has the following parameters:
   * enable: Boolean to enable/disable the filter (default: false)
   * alpha: Exponential moving average smoothing factor in range [0, 1]. Lower values provide more smoothing (great for static
     scenes), a value of 1.0 effectively acts as a passthrough (default: 0.4)
   * delta: Threshold for disparity change detection in range [0, 255]. Only pixels with disparity changes (current frame vs
     previous frame) below this value are filtered, otherwise filtering for that pixel is disabled (default: 100)
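
The semantics described above can be sketched in plain NumPy. This is an illustrative host-side model of the behavior, not the node's actual on-device implementation; the function name and the convention of marking invalid pixels as 0 are assumptions for the sketch:

```python
import numpy as np

def postprocess(disparity, confidence, edge, prev_disparity=None,
                conf_thresh=125, edge_thresh=10,
                temporal=True, alpha=0.4, delta=100):
    """Illustrative model of the NeuralDepth post-processing steps."""
    disp = disparity.astype(np.float32).copy()

    # Confidence / edge thresholds: pixels below either threshold are invalid.
    invalid = (confidence < conf_thresh) | (edge < edge_thresh)
    disp[invalid] = 0  # 0 marks an invalid pixel in this sketch

    # Temporal filter: exponential moving average over time, applied only to
    # valid pixels whose frame-to-frame disparity change is below `delta`.
    # alpha weights the current frame, so alpha = 1.0 is a passthrough and
    # lower values smooth more.
    if temporal and prev_disparity is not None:
        change = np.abs(disp - prev_disparity)
        stable = (change < delta) & ~invalid
        disp[stable] = alpha * disp[stable] + (1 - alpha) * prev_disparity[stable]

    return disp
```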

## Rectification

NeuralDepth includes an internal rectification step. If your input images are already rectified, you can disable it with
`setRectification(false)`.
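
Conceptually, rectification resamples each input image through precomputed per-pixel maps so that matching points end up on the same image row. A minimal nearest-neighbor NumPy sketch of applying such maps (illustrative only; the node derives its maps from the device calibration):

```python
import numpy as np

def apply_rectification_map(image, map_x, map_y):
    """Resample `image` through rectification maps (nearest-neighbor).

    map_x[y, x] and map_y[y, x] give the source pixel coordinates that land
    at output position (x, y) in the rectified image.
    """
    xs = np.clip(np.rint(map_x).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.rint(map_y).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]
```

Real implementations interpolate rather than use nearest-neighbor; the point here is only that already-rectified inputs need no second remap, which is why `setRectification(false)` exists.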

## Usage

#### Python

```python
import depthai as dai

pipeline = dai.Pipeline()

# Create mono cameras
monoLeft = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
monoRight = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)

# Create Outputs
leftOutput = monoLeft.requestOutput((640, 400))
rightOutput = monoRight.requestOutput((640, 400))

# Create NeuralDepth node
neuralDepth = pipeline.create(dai.node.NeuralDepth)
neuralDepth.build(leftOutput, rightOutput, dai.DeviceModelZoo.NEURAL_DEPTH_SMALL)

# Configure post-processing
neuralDepth.initialConfig.setConfidenceThreshold(125)
neuralDepth.initialConfig.setEdgeThreshold(10)

neuralDepth.initialConfig.postProcessing.temporalFilter.enable = True
neuralDepth.initialConfig.postProcessing.temporalFilter.delta = 100
neuralDepth.initialConfig.postProcessing.temporalFilter.alpha = 0.4

# Create output queue
depthQueue = neuralDepth.depth.createOutputQueue()
```

#### C++

```cpp
dai::Pipeline pipeline;

// Create mono cameras
auto monoLeft = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_B);
auto monoRight = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_C);

// Create outputs
auto leftOutput = monoLeft->requestOutput(std::make_pair(640, 400));
auto rightOutput = monoRight->requestOutput(std::make_pair(640, 400));

// Create NeuralDepth node
auto neuralDepth = pipeline.create<dai::node::NeuralDepth>();
neuralDepth->build(leftOutput, rightOutput, dai::DeviceModelZoo::NEURAL_DEPTH_SMALL);

// Configure post-processing
neuralDepth->initialConfig.setConfidenceThreshold(125);
neuralDepth->initialConfig.setEdgeThreshold(10);

neuralDepth->initialConfig.postProcessing.temporalFilter.enable = true;
neuralDepth->initialConfig.postProcessing.temporalFilter.delta = 100;
neuralDepth->initialConfig.postProcessing.temporalFilter.alpha = 0.4f;

// Create output queue
auto depthQueue = neuralDepth->depth.createOutputQueue();
```

## Examples of functionality

 * [Neural Depth Minimal](https://docs.luxonis.com/software-v3/depthai/examples/neural_depth/neural_depth_minimal.md) - Minimal
   example showing basic NeuralDepth usage with disparity output visualization.
 * [Neural Depth](https://docs.luxonis.com/software-v3/depthai/examples/neural_depth/neural_depth.md) - Demonstrates the
   NeuralDepth node with runtime configuration of confidence threshold, edge threshold, and temporal filtering.
 * [Neural Depth RGBD](https://docs.luxonis.com/software-v3/depthai/examples/neural_depth/neural_depth_rgbd.md) - Combines
   NeuralDepth with the RGBD node to generate a point cloud, viewable via remote connection.
 * [Neural Depth Align](https://docs.luxonis.com/software-v3/depthai/examples/neural_depth/neural_depth_align.md) - Demonstrates
   aligning NeuralDepth output to an RGB camera using the ImageAlign node.

## Reference

### dai::node::NeuralDepth

Kind: class

NeuralDepth node. Computes depth from a left-right image pair using a neural network.

#### std::shared_ptr< NeuralDepthConfig > initialConfig

Kind: variable

Initial config to use for NeuralDepth.

#### Subnode < Sync > sync

Kind: variable

#### Subnode < MessageDemux > messageDemux

Kind: variable

#### Subnode < Rectification > rectification

Kind: variable

#### Subnode < NeuralNetwork > neuralNetwork

Kind: variable

#### Input & left

Kind: variable

Input for left ImgFrame of left-right pair

#### Input & right

Kind: variable

Input for right ImgFrame of left-right pair

#### Output & rectifiedLeft

Kind: variable

Output for rectified left ImgFrame

#### Output & rectifiedRight

Kind: variable

Output for rectified right ImgFrame

#### Input inputConfig

Kind: variable

Input config to modify parameters in runtime.

#### Input nnDataInput

Kind: variable

Input NNData to parse

#### Input leftInternal

Kind: variable

Input left frame internal, used to extract frame info

#### Input rightInternal

Kind: variable

Input right frame internal, used to extract frame info

#### Output disparity

Kind: variable

Output disparity ImgFrame

#### Output depth

Kind: variable

Output depth ImgFrame

#### Output edge

Kind: variable

Output edge ImgFrame

#### Output confidence

Kind: variable

Output confidence ImgFrame

#### NeuralDepth()

Kind: function

#### NeuralDepth & setRectification(bool enable)

Kind: function

Enable or disable rectification (useful for pre-rectified inputs)

#### std::shared_ptr< NeuralDepth > build(Output & left, Output & right, DeviceModelZoo model)

Kind: function

#### void buildInternal()

Kind: function

Function called from within the `build` function.

### Need assistance?

Head over to [Discussion Forum](https://discuss.luxonis.com/) for technical support or any other questions you might have.
