# BenchmarkIn

The BenchmarkIn node is designed to receive messages and measure the frames per second (FPS) and latency of an incoming stream. It
periodically generates performance reports, making it useful for benchmarking and performance analysis of your DepthAI pipelines.
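
Conceptually, the node counts how many messages arrive over a time window (FPS) and compares each message's production timestamp against its arrival time (latency). A minimal, DepthAI-independent sketch of these two measurements (with a hypothetical `Message` type, not part of the DepthAI API) is:

```python
from dataclasses import dataclass

@dataclass
class Message:
    timestamp: float  # when the message was produced (seconds)

def benchmark(messages, receive_times):
    """Compute FPS and average latency over a batch of messages.

    FPS is the interval count divided by the span of receive times;
    latency is the delay between production and reception.
    """
    total_time = receive_times[-1] - receive_times[0]
    fps = (len(messages) - 1) / total_time if total_time > 0 else 0.0
    latencies = [rx - msg.timestamp for msg, rx in zip(messages, receive_times)]
    avg_latency = sum(latencies) / len(latencies)
    return fps, avg_latency

# Example: 5 messages produced every 100 ms, each received 20 ms later
msgs = [Message(timestamp=i * 0.1) for i in range(5)]
rx = [i * 0.1 + 0.02 for i in range(5)]
fps, avg_latency = benchmark(msgs, rx)
print(f"FPS: {fps:.1f}, average latency: {avg_latency * 1000:.0f} ms")
# → FPS: 10.0, average latency: 20 ms
```

BenchmarkIn performs this bookkeeping for you, on-device or on the host, and packages the results into periodic reports.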

## How to place it

#### Python

```python
import depthai as dai

pipeline = dai.Pipeline()
benchmarkIn = pipeline.create(dai.node.BenchmarkIn)

# Configure to send a performance report every 100 messages and
# choose whether the reports should be logged as warnings.
benchmarkIn.sendReportEveryNMessages(100)
benchmarkIn.logReportsAsWarnings(False)

# For example, link the output of a NeuralNetwork to BenchmarkIn:
neuralNetwork = pipeline.create(dai.node.NeuralNetwork)
neuralNetwork.out.link(benchmarkIn.input)
```

#### C++

```cpp
#include "depthai/depthai.hpp"

dai::Pipeline pipeline;
auto benchmarkIn = pipeline.create<dai::node::BenchmarkIn>();

// Configure BenchmarkIn to send a report every 100 messages and
// decide whether to log reports as warnings.
benchmarkIn->sendReportEveryNMessages(100);
benchmarkIn->logReportsAsWarnings(false);

// For example, link the output of a NeuralNetwork to BenchmarkIn:
auto neuralNetwork = pipeline.create<dai::node::NeuralNetwork>();
neuralNetwork->out.link(benchmarkIn->input);
```

## Inputs and Outputs

The node has one input and two outputs (see the reference below for details):

 * `input` - receives messages as fast as possible
 * `passthrough` - re-emits input messages, so the node can be placed between other nodes
 * `report` - sends a BenchmarkReport once the configured number of messages has been received
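
Because the `passthrough` output re-emits every input message, BenchmarkIn can be dropped between two existing nodes to measure a stream without interrupting it. A minimal wiring sketch (reusing only the linking and queue APIs shown in the examples on this page):

```python
import depthai as dai

pipeline = dai.Pipeline()

neuralNetwork = pipeline.create(dai.node.NeuralNetwork)
benchmarkIn = pipeline.create(dai.node.BenchmarkIn)

# Measure the network's output stream...
neuralNetwork.out.link(benchmarkIn.input)

# ...while downstream consumers keep receiving the same messages
# through the passthrough output:
passthroughQueue = benchmarkIn.passthrough.createOutputQueue()
reportQueue = benchmarkIn.report.createOutputQueue()
```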

## Usage

BenchmarkIn is typically connected to the output of another node (for example, a neural network or camera node) so that it can
monitor the performance of the data stream. The node periodically emits a BenchmarkReport that contains performance metrics such
as FPS and latency.

#### Python

```python
import depthai as dai

pipeline = dai.Pipeline()

# Create a BenchmarkIn node and configure it:
benchmarkIn = pipeline.create(dai.node.BenchmarkIn)
benchmarkIn.sendReportEveryNMessages(100)
benchmarkIn.logReportsAsWarnings(False)

# Example: connecting BenchmarkIn to a NeuralNetwork node
neuralNetwork = pipeline.create(dai.node.NeuralNetwork)
neuralNetwork.out.link(benchmarkIn.input)

# Create an output queue for receiving benchmark reports
outputQueue = benchmarkIn.report.createOutputQueue()

pipeline.start()
while pipeline.isRunning():
    benchmarkReport = outputQueue.get()
    print(f"FPS is {benchmarkReport.fps}")
```

#### C++

```cpp
#include <iostream>

#include "depthai/depthai.hpp"

dai::Pipeline pipeline;
auto benchmarkIn = pipeline.create<dai::node::BenchmarkIn>();
benchmarkIn->sendReportEveryNMessages(100);
benchmarkIn->logReportsAsWarnings(false);

auto neuralNetwork = pipeline.create<dai::node::NeuralNetwork>();
neuralNetwork->out.link(benchmarkIn->input);

auto outputQueue = benchmarkIn->report.createOutputQueue();
pipeline.start();
while(pipeline.isRunning()) {
    // get<T>() returns a std::shared_ptr, so access fields with ->
    auto benchmarkReport = outputQueue->get<dai::BenchmarkReport>();
    std::cout << "FPS is " << benchmarkReport->fps << std::endl;
}
```
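
The reference below also lists `setRunOnHost()` and `measureIndividualLatencies()`. Assuming the Python bindings mirror the C++ method names (the examples above only show `sendReportEveryNMessages` and `logReportsAsWarnings`), a fuller configuration sketch might look like:

```python
import depthai as dai

pipeline = dai.Pipeline()

benchmarkIn = pipeline.create(dai.node.BenchmarkIn)
benchmarkIn.sendReportEveryNMessages(50)      # one report per 50 messages
benchmarkIn.setRunOnHost(True)                # measure on the host rather than the device
benchmarkIn.logReportsAsWarnings(True)        # surface each report in the warning log
benchmarkIn.measureIndividualLatencies(True)  # attach per-message latencies to the report
```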

## Examples of functionality

 * [Benchmark Camera](https://docs.luxonis.com/software-v3/depthai/examples/benchmark/benchmark_camera.md)
 * [Benchmark NN](https://docs.luxonis.com/software-v3/depthai/examples/benchmark/benchmark_nn.md)
 * [Benchmark Simple](https://docs.luxonis.com/software-v3/depthai/examples/benchmark/benchmark_simple.md)

## Reference

### dai::node::BenchmarkIn

Kind: class

#### Input input

Kind: variable

Receive messages as fast as possible

#### Output passthrough

Kind: variable

Passthrough for input messages (so the node can be placed between other nodes)

#### Output report

Kind: variable

Send a benchmark report when the set number of messages has been received

#### void sendReportEveryNMessages(uint32_t n)

Kind: function

Specify how many messages to measure for each report

#### void setRunOnHost(bool runOnHost)

Kind: function

Specify whether to run on host or device. By default, the node will run on the device.

#### bool runOnHost()

Kind: function

Check if the node is set to run on host

#### void logReportsAsWarnings(bool logReportsAsWarnings)

Kind: function

Log the reports as warnings

#### void measureIndividualLatencies(bool attachLatencies)

Kind: function

Attach latencies to the report

#### void run()

Kind: function

#### DeviceNodeCRTP()

Kind: function

#### DeviceNodeCRTP(const std::shared_ptr< Device > & device)

Kind: function

#### DeviceNodeCRTP(std::unique_ptr< Properties > props)

Kind: function

#### DeviceNodeCRTP(std::unique_ptr< Properties > props, bool confMode)

Kind: function

#### DeviceNodeCRTP(const std::shared_ptr< Device > & device, std::unique_ptr< Properties > props, bool confMode)

Kind: function

### Need assistance?

Head over to [Discussion Forum](https://discuss.luxonis.com/) for technical support or any other questions you might have.
