Debugging DepthAI pipeline

Currently, tools for debugging the DepthAI pipeline are limited. We plan to create software that tracks all messages and queues, which would make it much easier to debug a "frozen" pipeline, usually caused by a blocking queue that has filled up.

DepthAI debugging level

You can enable debugging by changing the debugging level. It is set to warn by default. Levels and what they log:

critical
  Only a critical error that stops/crashes the program.

error
  Errors won't stop the program, but the action won't be completed. Examples:
  • When the ImageManip cropping ROI is out of bounds, an error gets printed and the cropping won't take place
  • When NeuralNetwork receives a frame whose shape (width/height/channels) doesn't match that of the .blob

warn
  Warnings are printed in cases where user action could improve or fix certain behavior. Example:
  • When the API changes, the old API style gets deprecated and a warning is shown to the user.

info
  Prints information about CPU/RAM consumption, temperature, and CMX slice and SHAVE core allocation.

debug
  Especially useful when starting and stopping the pipeline. Debug will print:
  • Information about device initialization, e.g. pipeline JSON and firmware/bootloader/OpenVINO versions
  • How the device/XLink is being closed/disposed

trace
  Trace will print out a Message whenever one is received from the device.

Debugging can be enabled either in code:

import depthai as dai

with dai.Device() as device: # Initialize device
    # Set debugging level
    device.setLogLevel(dai.LogLevel.DEBUG)
    device.setLogOutputLevel(dai.LogLevel.DEBUG)

Here, setLogLevel sets the verbosity that filters messages sent from the device to the host, while setLogOutputLevel sets the verbosity that filters messages printed on the host (stdout). This separation lets you capture log messages internally without printing them to stdout, so you can e.g. display them somewhere else or analyze them.
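For example, here is a minimal sketch of that pattern: the device log level is raised while stdout stays quiet, and the messages are collected with a callback instead (this assumes the Device.addLogCallback() API and the LogMessage payload field; check your depthai version):

import depthai as dai

pipeline = dai.Pipeline()
# ... add your nodes here ...

captured = []  # log messages collected on the host

def on_log(msg):
    # Store the message instead of printing it; it could also be forwarded to a GUI or a file
    captured.append((msg.level, msg.payload))

with dai.Device(pipeline) as device:
    device.setLogLevel(dai.LogLevel.DEBUG)        # send debug (and above) logs from device to host
    device.setLogOutputLevel(dai.LogLevel.ERROR)  # but only print errors to stdout
    device.addLogCallback(on_log)                 # all other received logs are handled here
    # ... run the pipeline ...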

You can also enable debugging using the DEPTHAI_LEVEL environment variable:

Linux/macOS:

DEPTHAI_LEVEL=debug python3 script.py

Windows PowerShell:

$env:DEPTHAI_LEVEL='debug'
python3 script.py

# Turn debugging off afterwards
Remove-Item Env:\DEPTHAI_LEVEL

Windows CMD:

set DEPTHAI_LEVEL=debug
python3 script.py

REM Turn debugging off afterwards
set DEPTHAI_LEVEL=

Script node logging

Currently, the best way to debug behaviour inside the Script node is to use the node.warn('') logging capability. This sends the warning back to the host, where it gets printed to the user. You can also print values such as frame sequence numbers, which is valuable when debugging on-device frame-syncing logic (a sketch of this follows the example below).

import depthai as dai

pipeline = dai.Pipeline()

script = pipeline.create(dai.node.Script)
script.setScript("""
    buf = NNData(13)
    buf.setLayer("fp16", [1.0, 1.2, 3.9, 5.5])
    buf.setLayer("uint8", [6, 9, 4, 2, 0])
    # Logging
    node.warn(f"Names of layers: {buf.getAllLayerNames()}")
    node.warn(f"Number of layers: {len(buf.getAllLayerNames())}")
    node.warn(f"FP16 values: {buf.getLayerFp16('fp16')}")
    node.warn(f"UINT8 values: {buf.getLayerUInt8('uint8')}")
""")

The code above will print the following values to the user:

[Script(0)] [warning] Names of layers: ['fp16', 'uint8']
[Script(0)] [warning] Number of layers: 2
[Script(0)] [warning] FP16 values: [1.2001953125, 1.2001953125, 3.900390625, 5.5]
[Script(0)] [warning] UINT8 values: [6, 9, 4, 2, 0]
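
As mentioned above, the same mechanism works well for logging frame sequence numbers when debugging on-device frame syncing. A minimal sketch (the stream name 'frames' is arbitrary, not a fixed API):

import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
script = pipeline.create(dai.node.Script)
cam.preview.link(script.inputs['frames'])

script.setScript("""
    while True:
        frame = node.io['frames'].get()
        # Log the sequence number and timestamp of every frame that arrives
        node.warn(f"Seq: {frame.getSequenceNum()}, ts: {frame.getTimestamp()}")
""")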

Resource debugging

By enabling the info log level (or a more verbose one), depthai will print the usage of hardware resources, specifically SHAVE core and CMX memory allocation:

NeuralNetwork allocated resources: shaves: [0-11] cmx slices: [0-11] # 12 SHAVES/CMXs allocated to NN
ColorCamera allocated resources: no shaves; cmx slices: [13-15] # 3 CMXs allocated to Color and Mono cameras (ISP)
MonoCamera allocated resources: no shaves; cmx slices: [13-15]
StereoDepth allocated resources: shaves: [12-12] cmx slices: [12-12] # StereoDepth node consumes 1 CMX and 1 SHAVE core
ImageManip allocated resources: shaves: [15-15] no cmx slices. # ImageManip node(s) consume 1 SHAVE core
SpatialLocationCalculator allocated resources: shaves: [14-14] no cmx slices. # SLC consumes 1 SHAVE core

In total, this pipeline consumes 15 SHAVE cores and 16 CMX slices. The pipeline is running an object detection model compiled for 6 SHAVE cores.
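
To get these allocation logs without changing any code, you can also just raise the level through the environment variable described above:

DEPTHAI_LEVEL=info python3 script.py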

CPU usage

When setting the DepthAI debugging level to debug (or a more verbose one), depthai will also print CPU usage for LeonOS and LeonRT. CPU usage at (or close to) 100% can cause many undesirable effects, such as higher frame latency, lower FPS, and in some cases even a firmware crash.

Compared to OAK USB cameras, OAK PoE cameras have increased CPU consumption, as the networking stack runs on the LeonOS core. Besides reducing the pipeline (doing less processing), a good alternative is to reduce the 3A FPS (ISP). This means that the 3A algorithms (auto exposure, auto white balance and auto focus) won't run on every frame, but on every Nth frame. After updating DepthAI SDK's camera_preview.py example (code change below), the LeonOS CPU usage decreased from 100% to ~46%:

# Without 3A FPS limit on OAK PoE camera:
Cpu Usage - LeonOS 99.99%, LeonRT: 6.91%

# Limiting 3A to 15 FPS on OAK PoE camera:
Cpu Usage - LeonOS 46.24%, LeonRT: 3.90%

Not having 100% CPU usage also drastically decreased frame latency; in the example script below it went from ~710 ms to ~110 ms:

https://github.com/luxonis/depthai-python/assets/18037362/84ec8de8-58ce-49c7-b882-048141d284e0
  from depthai_sdk import OakCamera

  with OakCamera() as oak:
      color = oak.create_camera('color')
      left = oak.create_camera('left')
      right = oak.create_camera('right')

+     # Limiting 3A to 15 FPS
+     for node in [color.node, left.node, right.node]:
+         node.setIsp3aFps(15)

      oak.visualize([color, left, right], fps=True, scale=2/3)
      oak.start(blocking=True)

Limiting the 3A FPS can be achieved by calling the setIsp3aFps() function on the camera node (either ColorCamera or MonoCamera).
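
If you are using the depthai API directly rather than the SDK, the same setting is applied on the camera nodes themselves; a minimal sketch (assuming a depthai version recent enough to expose setIsp3aFps()):

import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setFps(30)
cam.setIsp3aFps(15)  # run 3A (auto exposure/white balance/focus) at 15 FPS instead of on every frame

mono = pipeline.create(dai.node.MonoCamera)
mono.setBoardSocket(dai.CameraBoardSocket.CAM_B)
mono.setIsp3aFps(15)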

Got questions?

Head over to the Discussion Forum for technical support or any other questions you might have.