Benchmark Simple

This simple example measures pipeline latency by connecting a BenchmarkOut node to a BenchmarkIn node. Since nodes only pass message pointers (no data copying), the latency is very low, typically on the order of microseconds.
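
At its core, the measurement is just the two nodes linked together. The snippet below is a minimal sketch of that wiring, using only calls that appear in the full example; the complete, runnable program is listed under Source code below.

import depthai as dai

with dai.Pipeline(dai.Device()) as p:
    benchmarkOut = p.create(dai.node.BenchmarkOut)
    benchmarkIn = p.create(dai.node.BenchmarkIn)
    benchmarkOut.out.link(benchmarkIn.input)  # only the message pointer travels between the nodes
    reportQueue = benchmarkIn.report.createOutputQueue()  # BenchmarkReport messages arrive here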

Demo

Command Line
Benchmark $ python3.9 benchmark_simple.py
[2025-03-21 14:44:42.051] [ThreadedNode] [trace] Frame latency: 5.041e-06 s
[2025-03-21 14:44:42.086] [ThreadedNode] [trace] Frame latency: 1.0166e-05 s
[2025-03-21 14:44:42.122] [ThreadedNode] [trace] Frame latency: 6.25e-06 s
[2025-03-21 14:44:42.156] [ThreadedNode] [trace] Frame latency: 6.459e-06 s
[2025-03-21 14:44:42.185] [ThreadedNode] [trace] Frame latency: 7.875e-06 s
[2025-03-21 14:44:42.221] [ThreadedNode] [trace] Frame latency: 1.1542e-05 s
This example requires the DepthAI v3 API; see the installation instructions.

Pipeline

The pipeline graph is minimal: BenchmarkOut → BenchmarkIn, with a host-side input queue feeding the first ImgFrame into BenchmarkOut and the resulting BenchmarkReport read from BenchmarkIn's report output on the host.

Source code

Python
import depthai as dai

device = dai.Device()

with dai.Pipeline(device) as p:
    # Create a BenchmarkOut node.
    # It waits on its input for the first message and then sends that message out at the specified rate.
    # The node sends the same message out (it creates new pointers, not deep copies).
    benchmarkOut = p.create(dai.node.BenchmarkOut)
    benchmarkOut.setRunOnHost(True)  # The node can run on the host or on the device
    benchmarkOut.setFps(30)

    # Create a BenchmarkIn node.
    # It receives messages on its input and measures the FPS and latency.
    # When its input is fed directly by a BenchmarkOut node, the latency measurement is not always meaningful:
    # the message is not deep-copied, so its timestamps never change and the measured latency appears to grow over time.
    benchmarkIn = p.create(dai.node.BenchmarkIn)
    benchmarkIn.setRunOnHost(False)  # The node can run on the host or on the device
    benchmarkIn.sendReportEveryNMessages(100)
    benchmarkIn.logReportsAsWarnings(False)
    benchmarkIn.setLogLevel(dai.LogLevel.TRACE)

    benchmarkOut.out.link(benchmarkIn.input)
    outputQueue = benchmarkIn.report.createOutputQueue()
    inputQueue = benchmarkOut.input.createInputQueue()

    p.start()
    imgFrame = dai.ImgFrame()
    inputQueue.send(imgFrame)
    while p.isRunning():
        benchmarkReport = outputQueue.get()
        assert isinstance(benchmarkReport, dai.BenchmarkReport)
        print(f"FPS is {benchmarkReport.fps}")

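The example above only prints the FPS field of the BenchmarkReport. Depending on the depthai version, the report message may expose additional aggregate fields (for example an average latency); the field names probed below are assumptions, and the hypothetical helper only prints whatever is actually present at runtime.

def print_report(report: dai.BenchmarkReport) -> None:
    # 'fps' is the field used in the example above; the other names are assumptions checked via getattr
    print(f"FPS: {report.fps:.2f}")
    for field in ("averageLatency", "numMessagesReceived", "timeTotal"):
        value = getattr(report, field, None)
        if value is not None:
            print(f"{field}: {value}")

Calling print_report(benchmarkReport) inside the while loop replaces the plain FPS print.
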
Need assistance?

Head over to the Discussion Forum for technical support or any other questions you might have.