BenchmarkIn

The BenchmarkIn node receives messages and measures the frames per second (FPS) and latency of the incoming stream. It periodically emits performance reports, making it useful for benchmarking and performance analysis of your DepthAI pipelines.

How to place it

Python
import depthai as dai

pipeline = dai.Pipeline()
benchmarkIn = pipeline.create(dai.node.BenchmarkIn)

# Configure to send a performance report every 100 messages and
# choose whether the reports should be logged as warnings.
benchmarkIn.sendReportEveryNMessages(100)
benchmarkIn.logReportsAsWarnings(False)

# For example, link the output of a NeuralNetwork to BenchmarkIn:
neuralNetwork = pipeline.create(dai.node.NeuralNetwork)
neuralNetwork.out.link(benchmarkIn.input)

Inputs and Outputs

The node has one input (input), which receives the messages to be measured, and two outputs: report, which emits a BenchmarkReport after every configured number of messages, and passthrough, which forwards the incoming messages unchanged so the node can be placed between two other nodes.

Usage

BenchmarkIn is typically connected to the output of another node (for example, a neural network or camera node) so that it can monitor the performance of the data stream. The node periodically emits a BenchmarkReport that contains performance metrics such as FPS and latency.
Python
import depthai as dai

pipeline = dai.Pipeline()

# Create a BenchmarkIn node and configure it
benchmarkIn = pipeline.create(dai.node.BenchmarkIn)
benchmarkIn.sendReportEveryNMessages(100)
benchmarkIn.logReportsAsWarnings(False)

# Example: connecting BenchmarkIn to a NeuralNetwork node
# (a real pipeline would also set a model and link an input to the network)
neuralNetwork = pipeline.create(dai.node.NeuralNetwork)
neuralNetwork.out.link(benchmarkIn.input)

# Create an output queue for receiving benchmark reports
outputQueue = benchmarkIn.report.createOutputQueue()

# Start the pipeline before polling for reports
pipeline.start()
while pipeline.isRunning():
    benchmarkReport = outputQueue.get()
    print(f"FPS is {benchmarkReport.fps}")

Examples of functionality
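
A typical use is to place BenchmarkIn directly after a camera to measure its effective frame rate while the frames stay available downstream through the passthrough output. The sketch below is illustrative rather than taken from this page: the Camera setup (build(), requestOutput()) and the queue calls are assumptions based on the DepthAI v3 API.

Python
import depthai as dai

pipeline = dai.Pipeline()

# Camera source (assumed DepthAI v3 Camera API: build() + requestOutput())
camera = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
cameraOut = camera.requestOutput((640, 400), dai.ImgFrame.Type.NV12)

# BenchmarkIn measures the camera stream and reports every 30 frames
benchmarkIn = pipeline.create(dai.node.BenchmarkIn)
benchmarkIn.sendReportEveryNMessages(30)
cameraOut.link(benchmarkIn.input)

# Frames remain available downstream via the passthrough output
frameQueue = benchmarkIn.passthrough.createOutputQueue()
reportQueue = benchmarkIn.report.createOutputQueue()

pipeline.start()
while pipeline.isRunning():
    frame = frameQueue.get()       # ImgFrame, forwarded unchanged by BenchmarkIn
    report = reportQueue.tryGet()  # BenchmarkReport, roughly every 30 frames
    if report is not None:
        print(f"Camera FPS: {report.fps:.2f}")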

Reference

class
dai::node::BenchmarkIn

variable
Input input
Receive messages as fast as possible
variable
Output passthrough
Passthrough for input messages (so the node can be placed between other nodes)
variable
Output report
Send a benchmark report when the set number of messages has been received
function
void sendReportEveryNMessages(uint32_t n)
Specify how many messages to measure for each report
function
void setRunOnHost(bool runOnHost)
Specify whether to run on host or device. By default, the node runs on the device.
function
bool runOnHost()
Check if the node is set to run on host
function
void logReportsAsWarnings(bool logReportsAsWarnings)
Log the reports as warnings
function
void measureIndividualLatencies(bool attachLatencies)
Attach individual message latencies to the report
function
void run()
inline function
DeviceNodeCRTP()
inline function
DeviceNodeCRTP(const std::shared_ptr< Device > & device)
inline function
DeviceNodeCRTP(std::unique_ptr< Properties > props)
inline function
DeviceNodeCRTP(std::unique_ptr< Properties > props, bool confMode)
inline function
DeviceNodeCRTP(const std::shared_ptr< Device > & device, std::unique_ptr< Properties > props, bool confMode)
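
A minimal configuration sketch that ties the functions listed above together; the numeric values are arbitrary examples.

Python
import depthai as dai

pipeline = dai.Pipeline()
benchmarkIn = pipeline.create(dai.node.BenchmarkIn)

benchmarkIn.sendReportEveryNMessages(50)      # one report per 50 received messages
benchmarkIn.setRunOnHost(True)                # measure on the host instead of the device
benchmarkIn.measureIndividualLatencies(True)  # attach per-message latencies to each report
benchmarkIn.logReportsAsWarnings(True)        # also log every report at warning level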
