# Pipeline Debugging

You can enable logging by changing the debug level. It's set to `warn` by default, but more verbose levels can be set to help
debug issues. The following levels are available:

| *Debug Level* | Information |
| --- | --- |
| `critical` | Only critical errors that stop/crash the program. |
| `error` | Errors that don't stop the program, but the action won't complete. Examples: when an ImageManip cropping ROI is out of bounds, an error is printed and the crop doesn't take place; when a NeuralNetwork gets a frame whose shape (width/height/channels) doesn't match the `.blob`. |
| `warn` | Warnings are printed in cases where user action could improve or fix certain behavior. Example: when the API changes, the old API style is deprecated and a warning is shown to the user. |
| `info` | Prints information about CPU/RAM consumption, temperature, and CMX slice and SHAVE core allocation. |
| `debug` | Useful especially when starting and stopping the pipeline. Prints information about device initialization (e.g. pipeline JSON, firmware/bootloader/OpenVINO version) and about how the device/XLink is closed/disposed. |
| `trace` | Prints a message whenever one is received from the device. |

Debugging can be enabled either inside the code (via API), or via environment variables.

> Resource debugging is only available when setting the debug level via the environment variable `DEPTHAI_LEVEL`. It's **not**
> available when setting the debug level in code.

### Debugging with API

```python
import depthai as dai

with dai.Device() as device:  # Initialize device
    # Set debugging level
    device.setLogLevel(dai.LogLevel.DEBUG)
    device.setLogOutputLevel(dai.LogLevel.DEBUG)
```

Here `setLogLevel` sets the verbosity that filters messages sent from the device to the host, while `setLogOutputLevel` sets the
verbosity that filters messages printed on the host (stdout). This distinction makes it possible to capture log messages
internally without printing them to stdout, e.g. to display them somewhere else or analyze them.
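Captured messages can be routed to a handler instead of stdout. A minimal sketch, assuming the depthai `Device.addLogCallback` API and log messages that expose `level` and `payload` fields (check your depthai version's `LogMessage` for the exact attributes):

```python
from collections import defaultdict

class LogCollector:
    """Collects device log messages by severity instead of printing them."""
    def __init__(self):
        self.by_level = defaultdict(list)

    def __call__(self, msg):
        # depthai passes a LogMessage; we only rely on its `level` and `payload`
        self.by_level[str(msg.level)].append(msg.payload)

# Hypothetical wiring (requires a connected OAK device):
# import depthai as dai
# with dai.Device() as device:
#     device.setLogLevel(dai.LogLevel.DEBUG)       # device -> host filter
#     device.setLogOutputLevel(dai.LogLevel.WARN)  # stdout filter
#     collector = LogCollector()
#     device.addLogCallback(collector)             # capture instead of print
```

With this setup, `DEBUG`-level messages reach the collector but only `WARN` and above are printed, so the full stream stays available for display or analysis elsewhere.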

### Debugging with environment variable DEPTHAI_LEVEL

Using an environment variable to set the debugging level, rather than configuring it directly in code, provides additional
detailed information. This includes metrics such as CMX and SHAVE usage, and the time taken by each node in the pipeline to
process a single frame.

Example of a log message for [RGB Preview](https://docs.luxonis.com/software-v3/depthai/examples/rgb_preview.md) in INFO mode:

```bash
[184430102189660F00] [2.1] [0.675] [system] [info] DepthCamera allocated resources: shaves: [0-12] no cmx slices.
[184430102189660F00] [2.1] [0.675] [system] [info] SIPP (Signal Image Processing Pipeline) internal buffer size '18432'B, DMA buffer size: '16384'B
[184430102189660F00] [2.1] [0.711] [system] [info] ImageManip internal buffer size '285440'B, shave buffer size '34816'B
[184430102189660F00] [2.1] [0.711] [system] [info] ColorCamera allocated resources: no shaves; cmx slices: [13-15]
ImageManip allocated resources: shaves: [15-15] no cmx slices.
```

Example of a log message for [Stereo Depth](https://docs.luxonis.com/software-v3/depthai/examples/stereo_depth/stereo_depth.md) in
TRACE mode:

```bash
[19443010513F4D1300] [0.1.2] [2.014] [MonoCamera(0)] [trace] Mono ISP took '0.866377' ms.
[19443010513F4D1300] [0.1.2] [2.016] [MonoCamera(1)] [trace] Mono ISP took '1.272838' ms.
[19443010513F4D1300] [0.1.2] [2.019] [StereoDepth(2)] [trace] Stereo rectification took '2.661958' ms.
[19443010513F4D1300] [0.1.2] [2.027] [StereoDepth(2)] [trace] Stereo took '7.144515' ms.
[19443010513F4D1300] [0.1.2] [2.028] [StereoDepth(2)] [trace] 'Median' pipeline took '0.772257' ms.
[19443010513F4D1300] [0.1.2] [2.028] [StereoDepth(2)] [trace] Stereo post processing (total) took '0.810216' ms.
[2024-05-16 14:27:51.294] [depthai] [trace] Received message from device (disparity) - parsing time: 11µs, data size: 256000
```
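Per-node timings like the ones above can be collected on the host to profile a pipeline. A small parser sketch; the regex assumes the exact trace format shown above, so adjust it if your firmware's output differs:

```python
import re

# Matches per-node timing lines such as:
# [...] [StereoDepth(2)] [trace] Stereo took '7.144515' ms.
TIMING_RE = re.compile(r"\[(\w+\(\d+\))\] \[trace\] (.+?) took '([\d.]+)' ms")

def parse_timing(line):
    """Return (node, stage, milliseconds) for a timing trace line, else None."""
    m = TIMING_RE.search(line)
    if not m:
        return None
    node, stage, ms = m.groups()
    return node, stage, float(ms)
```

Feeding each trace line through `parse_timing` and aggregating by node makes it easy to spot which node dominates per-frame processing time.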

#### Linux/macOS

```bash
DEPTHAI_LEVEL=debug python3 script.py
```

#### Windows PowerShell

```powershell
$env:DEPTHAI_LEVEL='debug'
python3 script.py

# Turn debugging off afterwards
Remove-Item Env:DEPTHAI_LEVEL
```

#### Windows CMD

```batch
set DEPTHAI_LEVEL=debug
python3 script.py

REM Turn debugging off afterwards
set DEPTHAI_LEVEL=
```

## CPU usage

When setting the debug level to debug (or lower), depthai will also print out CPU usage for LeonOS and LeonRT. CPU usage at 100%
(or close to it) can cause many undesirable effects, such as higher frame latency, lower FPS, and in some cases even a firmware
crash.

Compared to OAK USB cameras, OAK PoE cameras have increased CPU consumption, as the networking stack runs on the LeonOS
core. The easiest way to reduce CPU consumption is to reduce pipeline complexity, or to reduce the FPS of the cameras, as they
are the main consumers of CPU (running 3A algorithms).

Keeping CPU usage below 100% also drastically decreases frame latency; in one measurement it dropped from ~710 ms to
~110 ms.
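To watch for CPU saturation programmatically, the usage lines can be parsed on the host. The exact wording of the CPU line varies between firmware versions, so the sample below is illustrative only and the pattern is a sketch to adapt:

```python
import re

def leon_cpu_usage(line):
    """Extract (LeonOS %, LeonRT %) from a depthai CPU-usage log line, else None.
    The pattern assumes a line shaped like the illustrative sample below."""
    m = re.search(r"LeonOS[:\s]+([\d.]+)%.*LeonRT[:\s]+([\d.]+)%", line)
    return (float(m.group(1)), float(m.group(2))) if m else None

# Illustrative only -- check your firmware's actual output:
sample = "[system] [info] Cpu Usage - LeonOS 73.33%, LeonRT: 25.50%"
```

Alerting when LeonOS usage approaches 100% gives an early warning before latency climbs or the firmware becomes unstable.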

## RAM usage

All RVC2-based OAK devices have 512 MiB (4 Gbit) of on-board RAM, which is used for the firmware (about 15 MB), assets (a few KB
up to 100 MB, e.g. NN models), and other resources, such as message pools where messages are stored.

If you enable `info` (see the [Pipeline Debugging](#pipeline-debugging) section), you will see how RAM is used:

```bash
[info] Memory Usage - DDR: 41.23 / 358.82 MiB, CMX: 2.17 / 2.50 MiB, LeonOS Heap: 20.70 / 78.63 MiB, LeonRT Heap: 3.51 / 23.84 MiB
```
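If you capture these lines on the host, a small parser lets you alert before hitting the DDR limit. A sketch; the regex assumes the `used / total MiB` format shown above:

```python
import re

def memory_usage(line):
    """Parse 'used / total' MiB pairs from a depthai Memory Usage log line."""
    pairs = re.findall(
        r"(DDR|CMX|LeonOS Heap|LeonRT Heap): ([\d.]+) / ([\d.]+) MiB", line)
    return {name: (float(used), float(total)) for name, used, total in pairs}

sample = ("Memory Usage - DDR: 41.23 / 358.82 MiB, CMX: 2.17 / 2.50 MiB, "
          "LeonOS Heap: 20.70 / 78.63 MiB, LeonRT Heap: 3.51 / 23.84 MiB")
```

Checking `used / total` for the DDR entry against a threshold (say 0.9) is a simple way to catch a pipeline drifting toward `OUT_OF_MEMORY`.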

As you can see, RAM is split between the two LEON (CPU) cores, CMX (used for image manipulation), and DDR (everything else). If
DDR usage is close to the max (in this example, 358 MiB), you might get an error such as:

```bash
[error] Neural network executor '0' out of '2' error: OUT_OF_MEMORY
```

This means you should decrease RAM consumption; below are a few ways to do so.

### Decreasing RAM consumption

 * Pool sizes - some nodes (including
   [ColorCamera](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/color_camera.md),
   [PointCloud](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/pointcloud.md),
   [ImageManip](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/image_manip.md),
   [EdgeDetector](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/edge_detector.md),
   [MonoCamera](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/mono_camera.md),
   [StereoDepth](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/stereo_depth.md),
   [VideoEncoder](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/video_encoder.md),
   [Warp](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/warp.md)) have configurable pool sizes ([Pool docs
   here](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes.md)). This means that the user can configure how
   many messages will be stored in the pool (RAM). If you are hitting RAM usage limits, the easiest way to reduce it is by
   decreasing the pool size. The API for configuring pool size is `node.setNumFramesPool(num_frames_pool)`. For ColorCamera, the
   user needs to specify 5 pool sizes: `colorCam.setNumFramesPool(raw, isp, preview, video, still)`. Note that decreasing a pool
   size below 2 will cause pipeline issues.

 * Large frames - If we change the resolution from 1080P to 4K in the [RGB
   video](https://docs.luxonis.com/software-v3/depthai/examples/rgb_video.md) example, DDR usage will increase from 41 MiB to 161
   MiB. That's because a 4K frame uses 4x more RAM than a 1080P frame. An easy way to decrease RAM consumption is to use lower
   resolution / smaller frames.

 * VideoEncoder - [VideoEncoder](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/video_encoder.md) nodes can
   consume a lot of RAM, especially at high resolutions. For example, the [RGB
   Encoding](https://docs.luxonis.com/software-v3/depthai/examples/rgb_encoding.md) example consumes 259 MiB. If we change the
   resolution from 4K to 1080P, DDR consumption drops to only 65 MiB.

 * ImageManip - Each [ImageManip](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/image_manip.md) node has
   its own (output) pool of 4 frames (by default), so having multiple ImageManips that manipulate high-resolution frames will
   consume a lot of DDR RAM. By default, each pool "spot" consumes 1 MiB, even for a small 300x300 RGB frame (which is 270 kB).
   Specifying the output frame size can therefore decrease RAM usage as well, e.g. for a 300x300 RGB frame you can set
   `manip.setMaxOutputFrameSize(270000)`.

 * XLinkIn - Just like [ImageManip](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/image_manip.md), each
   XLinkIn node has its own message pool. By default, each XLinkIn will consume 40 MiB, as each pool "spot" has 5 MiB reserved
   and there are 8 "spots" in the pool. If you are sending 300x300 RGB frames from the host to the device, you can set
   `xin.setMaxDataSize(270000)` and also limit the number of messages in the pool with `xin.setNumFrames(4)`. This will decrease
   DDR RAM consumption from 40 MiB to about 1 MiB.

> If you are just sending control/config messages from the host, you can set `xin.setMaxDataSize(1)`, as
> [CameraControl](https://docs.luxonis.com/software-v3/depthai/depthai-components/messages/camera_control.md) and
> [ImageManipConfig](https://docs.luxonis.com/software-v3/depthai/depthai-components/messages/image_manip_config.md) only hold
> metadata (without any data, unlike
> [NNData](https://docs.luxonis.com/software-v3/depthai/depthai-components/messages/nn_data.md) /
> [ImgFrame](https://docs.luxonis.com/software-v3/depthai/depthai-components/messages/img_frame.md) /
> [Buffer](https://docs.luxonis.com/software-v3/depthai/depthai-components/messages/buffer.md)).
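The pool arithmetic in the bullets above can be sanity-checked with a small helper. A sketch assuming raw interleaved 8-bit pixels; the commented depthai calls are the ones described in the XLinkIn bullet:

```python
def pool_bytes(num_frames, width, height, channels=3, bytes_per_pixel=1):
    """Approximate DDR footprint of a message pool holding raw frames."""
    return num_frames * width * height * channels * bytes_per_pixel

# A 300x300 RGB frame is 270,000 B, so a 4-slot pool needs ~1 MiB --
# far less than the XLinkIn default of 8 slots x 5 MiB = 40 MiB:
# xin.setMaxDataSize(270000)  # per-slot buffer size, in bytes
# xin.setNumFrames(4)         # number of slots in the pool
```

Running the numbers this way before shrinking pools helps confirm the expected DDR savings without trial-and-error on the device.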
