Benchmark Simple

Supported on: RVC2, RVC4

This simple example measures pipeline latency by connecting a BenchmarkOut node to a BenchmarkIn node. Since nodes only pass message pointers (no data copying), the latency is very low, typically on the order of microseconds.
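As a rough illustration of why pointer passing is so cheap, here is a minimal host-side sketch (plain Python, no DepthAI required) in the same spirit as BenchmarkOut/BenchmarkIn: the producer timestamps each message and enqueues only a reference to it, and the consumer measures how long the hand-off took. The large payload is never copied, so latency stays small regardless of payload size.

```python
import queue
import threading
import time

q = queue.Queue()
n = 5
latencies = []

def consumer():
    # Receive n messages and record the hand-off latency for each.
    for _ in range(n):
        msg = q.get()
        latencies.append(time.monotonic() - msg["ts"])

t = threading.Thread(target=consumer)
t.start()

for _ in range(n):
    # Only the reference to the dict (and its ~10 MB payload) is enqueued;
    # no data is copied during the hand-off.
    q.put({"ts": time.monotonic(), "payload": bytearray(10_000_000)})

t.join()
print(f"mean hand-off latency: {sum(latencies) / n:.6f} s")
```

The hand-off latency measured here is dominated by queue and thread-scheduling overhead, not by the payload size, which mirrors why the DepthAI numbers in the demo below are in the microsecond range.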

Demo

Command Line
Benchmark $ python3.9 benchmark_simple.py
[2025-03-21 14:44:42.051] [ThreadedNode] [trace] Frame latency: 5.041e-06 s
[2025-03-21 14:44:42.086] [ThreadedNode] [trace] Frame latency: 1.0166e-05 s
[2025-03-21 14:44:42.122] [ThreadedNode] [trace] Frame latency: 6.25e-06 s
[2025-03-21 14:44:42.156] [ThreadedNode] [trace] Frame latency: 6.459e-06 s
[2025-03-21 14:44:42.185] [ThreadedNode] [trace] Frame latency: 7.875e-06 s
[2025-03-21 14:44:42.221] [ThreadedNode] [trace] Frame latency: 1.1542e-05 s
This example requires the DepthAI v3 API; see the installation instructions.

Source code

Python

import depthai as dai

device = dai.Device()

with dai.Pipeline(device) as p:
    # Create a BenchmarkOut node.
    # It waits for the first message on its input and then sends it out at the specified rate.
    # The node sends out the same message (new pointers to it), not deep copies.
    benchmarkOut = p.create(dai.node.BenchmarkOut)
    benchmarkOut.setRunOnHost(True)  # The node can run on host or on device
    benchmarkOut.setFps(30)

    # Create a BenchmarkIn node.
    # This node receives messages on its input and measures FPS and latency.
    # When the input comes from BenchmarkOut, latency measurement is not always meaningful:
    # the message is not deep copied, so its timestamps stay the same and the reported
    # latency appears to increase over time.
    benchmarkIn = p.create(dai.node.BenchmarkIn)
    benchmarkIn.setRunOnHost(False)  # The node can run on host or on device
    benchmarkIn.sendReportEveryNMessages(100)
    benchmarkIn.logReportsAsWarnings(False)
    benchmarkIn.setLogLevel(dai.LogLevel.TRACE)

    benchmarkOut.out.link(benchmarkIn.input)
    outputQueue = benchmarkIn.report.createOutputQueue()
    inputQueue = benchmarkOut.input.createInputQueue()

    p.start()
    imgFrame = dai.ImgFrame()
    inputQueue.send(imgFrame)
    while p.isRunning():
        benchmarkReport = outputQueue.get()
        assert isinstance(benchmarkReport, dai.BenchmarkReport)
        print(f"FPS is {benchmarkReport.fps}")

C++

#include <iostream>

#include "depthai/depthai.hpp"

int main() {
    // Create pipeline without implicit device
    dai::Pipeline pipeline;

    // Create a BenchmarkOut node.
    // It waits for the first message on its input and then sends it out at the specified rate.
    // The node sends out the same message (new pointers to it), not deep copies.
    auto benchmarkOut = pipeline.create<dai::node::BenchmarkOut>();
    benchmarkOut->setRunOnHost(true);  // The node can run on host or on device
    benchmarkOut->setFps(30);

    // Create a BenchmarkIn node.
    // This node receives messages on its input and measures FPS and latency.
    // When the input comes from BenchmarkOut, latency measurement is not always meaningful:
    // the message is not deep copied, so its timestamps stay the same and the reported
    // latency appears to increase over time.
    auto benchmarkIn = pipeline.create<dai::node::BenchmarkIn>();
    benchmarkIn->setRunOnHost(true);  // The node can run on host or on device
    benchmarkIn->sendReportEveryNMessages(100);
    benchmarkIn->logReportsAsWarnings(false);
    benchmarkIn->setLogLevel(dai::LogLevel::TRACE);

    benchmarkOut->out.link(benchmarkIn->input);
    auto outputQueue = benchmarkIn->report.createOutputQueue();
    auto inputQueue = benchmarkOut->input.createInputQueue();

    pipeline.start();
    auto imgFrame = std::make_shared<dai::ImgFrame>();
    inputQueue->send(imgFrame);
    while(pipeline.isRunning()) {
        auto benchmarkReport = outputQueue->get<dai::BenchmarkReport>();
        std::cout << "FPS is " << benchmarkReport->fps << std::endl;
    }

    return 0;
}

Need assistance?

Head over to the Discussion Forum for technical support or any other questions you might have.