# Benchmark Simple

This simple example measures pipeline latency by connecting a
[BenchmarkOut](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/benchmark_out.md) node to a
[BenchmarkIn](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/benchmark_in.md) node.

Since nodes only pass message pointers (no data copying), the latency is very low, typically on the order of microseconds.
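
To illustrate why passing references is so cheap, here is a plain-Python sketch (standard library only, not the DepthAI API) that times a message hand-off through a queue. The `Frame` class and timing approach are illustrative assumptions, not part of DepthAI:

```python
import queue
import threading
import time

q = queue.Queue()

class Frame:
    """Stand-in for a message; records its creation time."""
    def __init__(self):
        self.created = time.monotonic()

def consumer(results):
    frame = q.get()
    # Latency from creation to receipt; no payload was copied.
    results["latency_s"] = time.monotonic() - frame.created
    results["frame"] = frame

results = {}
t = threading.Thread(target=consumer, args=(results,))
t.start()

frame = Frame()
q.put(frame)  # enqueues a reference to the frame, not a copy
t.join()

print(f"Hand-off latency: {results['latency_s']:.2e} s")
assert results["frame"] is frame  # receiver sees the very same object
```

Because only a reference moves between threads, the hand-off time is independent of the frame's payload size, which is the same reason the BenchmarkOut-to-BenchmarkIn link above reports microsecond latencies.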

## Demo

```bash
Benchmark $ python3.9 benchmark_simple.py
[2025-03-21 14:44:42.051] [ThreadedNode] [trace] Frame latency: 5.041e-06 s
[2025-03-21 14:44:42.086] [ThreadedNode] [trace] Frame latency: 1.0166e-05 s
[2025-03-21 14:44:42.122] [ThreadedNode] [trace] Frame latency: 6.25e-06 s
[2025-03-21 14:44:42.156] [ThreadedNode] [trace] Frame latency: 6.459e-06 s
[2025-03-21 14:44:42.185] [ThreadedNode] [trace] Frame latency: 7.875e-06 s
[2025-03-21 14:44:42.221] [ThreadedNode] [trace] Frame latency: 1.1542e-05 s
```

This example requires the DepthAI v3 API, see [installation instructions](https://docs.luxonis.com/software-v3/depthai.md).

## Source code

#### Python

```python
import depthai as dai

device = dai.Device()

with dai.Pipeline(device) as p:
    # Create a BenchmarkOut node
    # It waits for the first message on its input, then repeatedly sends it out at the configured rate.
    # The node resends the same message (new pointers to it), not deep copies.
    benchmarkOut = p.create(dai.node.BenchmarkOut)
    benchmarkOut.setRunOnHost(True) # The node can run on host or on device
    benchmarkOut.setFps(30)

    # Create a BenchmarkIn node
    # This node receives messages on its input and measures FPS and latency.
    # When the input comes straight from BenchmarkOut, latency measurement is not always meaningful:
    # the message is not deep copied, so its timestamps never change and the reported latency
    # grows with every resend.
    benchmarkIn = p.create(dai.node.BenchmarkIn)
    benchmarkIn.setRunOnHost(False) # The node can run on host or on device
    benchmarkIn.sendReportEveryNMessages(100)
    benchmarkIn.logReportsAsWarnings(False)
    benchmarkIn.setLogLevel(dai.LogLevel.TRACE)

    benchmarkOut.out.link(benchmarkIn.input)
    outputQueue = benchmarkIn.report.createOutputQueue()
    inputQueue = benchmarkOut.input.createInputQueue()

    p.start()
    imgFrame = dai.ImgFrame()
    inputQueue.send(imgFrame)
    while p.isRunning():
        benchmarkReport = outputQueue.get()
        assert isinstance(benchmarkReport, dai.BenchmarkReport)
        print(f"FPS is {benchmarkReport.fps}")
```

#### C++

```cpp
#include <atomic>
#include <csignal>
#include <iostream>

#include "depthai/depthai.hpp"

std::atomic<bool> quitEvent(false);

void signalHandler(int) {
    quitEvent = true;
}

int main() {
    signal(SIGTERM, signalHandler);
    signal(SIGINT, signalHandler);

    // Create pipeline without implicit device
    dai::Pipeline pipeline;

    // Create a BenchmarkOut node
    // It waits for the first message on its input, then repeatedly sends it out at the configured rate.
    // The node resends the same message (new pointers to it), not deep copies.
    auto benchmarkOut = pipeline.create<dai::node::BenchmarkOut>();
    benchmarkOut->setRunOnHost(true);  // The node can run on host or on device
    benchmarkOut->setFps(30);

    // Create a BenchmarkIn node
    // This node receives messages on its input and measures FPS and latency.
    // When the input comes straight from BenchmarkOut, latency measurement is not always meaningful:
    // the message is not deep copied, so its timestamps never change and the reported latency
    // grows with every resend.
    auto benchmarkIn = pipeline.create<dai::node::BenchmarkIn>();
    benchmarkIn->setRunOnHost(true);  // The node can run on host or on device
    benchmarkIn->sendReportEveryNMessages(100);
    benchmarkIn->logReportsAsWarnings(false);
    benchmarkIn->setLogLevel(dai::LogLevel::TRACE);

    benchmarkOut->out.link(benchmarkIn->input);
    auto outputQueue = benchmarkIn->report.createOutputQueue();
    auto inputQueue = benchmarkOut->input.createInputQueue();

    pipeline.start();
    auto imgFrame = std::make_shared<dai::ImgFrame>();
    inputQueue->send(imgFrame);
    while(pipeline.isRunning() && !quitEvent) {
        auto benchmarkReport = outputQueue->get<dai::BenchmarkReport>();
        std::cout << "FPS is " << benchmarkReport->fps << std::endl;
    }

    pipeline.stop();
    pipeline.wait();

    return 0;
}
```

### Need assistance?

Head over to [Discussion Forum](https://discuss.luxonis.com/) for technical support or any other questions you might have.
