Benchmark Simple

Supported on: RVC2, RVC4

This simple example measures pipeline latency by connecting a BenchmarkOut node to a BenchmarkIn node. Since nodes only pass message pointers (no data is copied), the latency is very low, typically on the order of microseconds.

Demo

Command Line
Benchmark $ python3.9 benchmark_simple.py
[2025-03-21 14:44:42.051] [ThreadedNode] [trace] Frame latency: 5.041e-06 s
[2025-03-21 14:44:42.086] [ThreadedNode] [trace] Frame latency: 1.0166e-05 s
[2025-03-21 14:44:42.122] [ThreadedNode] [trace] Frame latency: 6.25e-06 s
[2025-03-21 14:44:42.156] [ThreadedNode] [trace] Frame latency: 6.459e-06 s
[2025-03-21 14:44:42.185] [ThreadedNode] [trace] Frame latency: 7.875e-06 s
[2025-03-21 14:44:42.221] [ThreadedNode] [trace] Frame latency: 1.1542e-05 s
This example requires the DepthAI v3 API; see the installation instructions.
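As a quick sanity check on the "order of microseconds" claim, the latencies from the trace log above can be averaged. The values below are copied from this particular run; your numbers will differ between runs and devices:

```python
# Frame latencies (in seconds) copied from the trace log above.
latencies = [5.041e-06, 1.0166e-05, 6.25e-06, 6.459e-06, 7.875e-06, 1.1542e-05]

mean = sum(latencies) / len(latencies)
print(f"mean latency: {mean * 1e6:.2f} us")  # roughly 7.9 us for this run
```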

Source code

Python

import depthai as dai

device = dai.Device()

with dai.Pipeline(device) as p:
    # Create a BenchmarkOut node.
    # It waits for the first message on its input and then sends it out at the specified rate.
    # The node sends out the same message (new pointers, not deep copies).
    benchmarkOut = p.create(dai.node.BenchmarkOut)
    benchmarkOut.setRunOnHost(True)  # The node can run on host or on device
    benchmarkOut.setFps(30)

    # Create a BenchmarkIn node.
    # It receives messages on its input and measures FPS and latency.
    # When the input is linked directly to BenchmarkOut, the latency measurement is not always
    # meaningful: the message is not deep copied, so its timestamp never changes and the
    # measured latency virtually increases over time.
    benchmarkIn = p.create(dai.node.BenchmarkIn)
    benchmarkIn.setRunOnHost(False)  # The node can run on host or on device
    benchmarkIn.sendReportEveryNMessages(100)
    benchmarkIn.logReportsAsWarnings(False)
    benchmarkIn.setLogLevel(dai.LogLevel.TRACE)

    benchmarkOut.out.link(benchmarkIn.input)
    outputQueue = benchmarkIn.report.createOutputQueue()
    inputQueue = benchmarkOut.input.createInputQueue()

    p.start()
    imgFrame = dai.ImgFrame()
    inputQueue.send(imgFrame)
    while p.isRunning():
        benchmarkReport = outputQueue.get()
        assert isinstance(benchmarkReport, dai.BenchmarkReport)
        print(f"FPS is {benchmarkReport.fps}")

C++

#include <atomic>
#include <csignal>
#include <iostream>

#include "depthai/depthai.hpp"

std::atomic<bool> quitEvent(false);

void signalHandler(int) {
    quitEvent = true;
}

int main() {
    signal(SIGTERM, signalHandler);
    signal(SIGINT, signalHandler);

    // Create pipeline without implicit device
    dai::Pipeline pipeline;

    // Create a BenchmarkOut node.
    // It waits for the first message on its input and then sends it out at the specified rate.
    // The node sends out the same message (new pointers, not deep copies).
    auto benchmarkOut = pipeline.create<dai::node::BenchmarkOut>();
    benchmarkOut->setRunOnHost(true);  // The node can run on host or on device
    benchmarkOut->setFps(30);

    // Create a BenchmarkIn node.
    // It receives messages on its input and measures FPS and latency.
    // When the input is linked directly to BenchmarkOut, the latency measurement is not always
    // meaningful: the message is not deep copied, so its timestamp never changes and the
    // measured latency virtually increases over time.
    auto benchmarkIn = pipeline.create<dai::node::BenchmarkIn>();
    benchmarkIn->setRunOnHost(true);  // The node can run on host or on device
    benchmarkIn->sendReportEveryNMessages(100);
    benchmarkIn->logReportsAsWarnings(false);
    benchmarkIn->setLogLevel(dai::LogLevel::TRACE);

    benchmarkOut->out.link(benchmarkIn->input);
    auto outputQueue = benchmarkIn->report.createOutputQueue();
    auto inputQueue = benchmarkOut->input.createInputQueue();

    pipeline.start();
    auto imgFrame = std::make_shared<dai::ImgFrame>();
    inputQueue->send(imgFrame);
    while(pipeline.isRunning() && !quitEvent) {
        auto benchmarkReport = outputQueue->get<dai::BenchmarkReport>();
        std::cout << "FPS is " << benchmarkReport->fps << std::endl;
    }

    pipeline.stop();
    pipeline.wait();

    return 0;
}
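The latency caveat noted in the comments (BenchmarkOut resends the same message, so its timestamp never changes) can be illustrated without any hardware. This is a plain-Python sketch, not DepthAI API usage: it stamps one "message" once and then measures latency as current time minus that fixed timestamp, the way each resend would be measured:

```python
import time

# The message is stamped once, when it is first created. Resends of the same
# message keep this timestamp, so "latency = now - timestamp" grows with
# wall-clock time rather than reflecting real transport delay.
send_timestamp = time.monotonic()

measured = []
for _ in range(3):
    time.sleep(0.05)  # 50 ms between resends of the same message
    measured.append(time.monotonic() - send_timestamp)

# Each successive measurement is larger, even though nothing got slower.
assert measured[0] < measured[1] < measured[2]
```

This is why the comments describe the latency as "virtually increasing over time" when BenchmarkIn is fed directly by BenchmarkOut; FPS, by contrast, remains meaningful.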

Need assistance?

Head over to the Discussion Forum for technical support or any other questions you might have.