Benchmark NN

This example showcases how to use the BenchmarkOut node and the BenchmarkIn node together to measure the performance of a NN model. BenchmarkOut outputs messages as fast as possible, and BenchmarkIn reports how many messages per second arrive, so linking BenchmarkOut -> NeuralNetwork -> BenchmarkIn measures the model's throughput.
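Conceptually, BenchmarkIn simply counts how many messages arrive over a window and reports count divided by elapsed time as FPS. A minimal host-side illustration of that idea in plain Python (measure_fps is a hypothetical helper for illustration only, not part of the DepthAI API):

```python
import time

def measure_fps(receive_message, num_messages: int = 100) -> float:
    """Time how long it takes to receive num_messages and report messages/second."""
    start = time.monotonic()
    for _ in range(num_messages):
        receive_message()  # stand-in for pulling one message off a queue
    elapsed = time.monotonic() - start
    return num_messages / elapsed

# Simulate a source that delivers a message roughly every millisecond
fps = measure_fps(lambda: time.sleep(0.001))
print(f"FPS is {fps}")
```

With a ~1 ms delay per message the reported rate stays below 1000 FPS; the real BenchmarkIn node does the same bookkeeping per window of N messages (see sendReportEveryNMessages in the source code below).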

Demo

The yolov6-nano NN model should run at ~273 FPS on an OAK4 camera, and at ~67 FPS on an RVC2-based OAK camera.
Command Line
Benchmark $ python3.9 benchmark_nn.py
FPS is 273.2430114746094
FPS is 273.161376953125
FPS is 273.22802734375

Setup

This example requires the DepthAI v3 API, see installation instructions.

Pipeline

BenchmarkOut -> NeuralNetwork -> BenchmarkIn, with the BenchmarkIn report output queued to the host.

Source code

Python
import depthai as dai
import numpy as np


# First prepare the model for benchmarking
device = dai.Device()
modelPath = dai.getModelFromZoo(dai.NNModelDescription("yolov6-nano", platform=device.getPlatformAsString()))
modelArchive = dai.NNArchive(modelPath)
inputSize = modelArchive.getInputSize()
typeStr = modelArchive.getConfig().model.inputs[0].preprocessing.daiType

if typeStr:
    try:
        frameType = getattr(dai.ImgFrame.Type, typeStr)
    except AttributeError:
        typeStr = None

if not typeStr:
    # Fall back to a platform-appropriate default frame type
    if device.getPlatform() == dai.Platform.RVC2:
        frameType = dai.ImgFrame.Type.BGR888p
    else:
        frameType = dai.ImgFrame.Type.BGR888i


# Construct the input (white) image for benchmarking
img = np.ones((inputSize[1], inputSize[0], 3), np.uint8) * 255
inputFrame = dai.ImgFrame()
inputFrame.setCvFrame(img, frameType)

with dai.Pipeline(device) as p:
    benchmarkOut = p.create(dai.node.BenchmarkOut)
    benchmarkOut.setRunOnHost(False)  # The node can run on host or on device
    benchmarkOut.setFps(-1)  # As fast as possible

    neuralNetwork = p.create(dai.node.NeuralNetwork).build(benchmarkOut.out, modelArchive)

    benchmarkIn = p.create(dai.node.BenchmarkIn)
    benchmarkIn.setRunOnHost(False)  # The node can run on host or on device
    benchmarkIn.sendReportEveryNMessages(100)
    benchmarkIn.logReportsAsWarnings(False)
    neuralNetwork.out.link(benchmarkIn.input)

    outputQueue = benchmarkIn.report.createOutputQueue()
    inputQueue = benchmarkOut.input.createInputQueue()

    p.start()
    inputQueue.send(inputFrame)  # Send the input image only once; BenchmarkOut repeats it
    while p.isRunning():
        benchmarkReport = outputQueue.get()
        assert isinstance(benchmarkReport, dai.BenchmarkReport)
        print(f"FPS is {benchmarkReport.fps}")
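
Each printed report covers only the most recent 100 inferences, so the FPS figure fluctuates slightly from report to report (as seen in the demo output above). If a steadier number is wanted, the per-report values can be averaged on the host. A small sketch in plain Python (FpsAggregator is a hypothetical helper for illustration, not part of the DepthAI API):

```python
class FpsAggregator:
    """Keep a running mean of the per-report FPS values."""
    def __init__(self) -> None:
        self.count = 0
        self.total = 0.0

    def add(self, fps: float) -> float:
        """Record one report's FPS and return the mean over all reports so far."""
        self.count += 1
        self.total += fps
        return self.total / self.count

agg = FpsAggregator()
for fps in (273.24, 273.16, 273.22):  # values like those in the demo output
    avg = agg.add(fps)
print(f"Average FPS is {avg:.2f}")
```

In the real loop you would call agg.add(benchmarkReport.fps) for each report instead of iterating over fixed values.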

Need assistance?

Head over to the Discussion Forum for technical support or any other questions you might have.