
ON THIS PAGE

  • Why HFR matters
  • Supported HFR modes
  • Example applications
  • Included example pipelines
  • Source code
  • High Frame Rate Object Detection (YOLOv6)
  • Small live preview
  • Save encoded stream

Camera high frame rate (HFR)

Supported on: RVC4
High Frame Rate (HFR) mode in DepthAI 3.4.0 unlocks ultra-fast perception on OAK4 devices with the RVC4 platform and IMX586 sensor. These pipelines can capture and process up to 480 FPS, including neural network inference on every frame.

Why HFR matters

  • Capture fast motion with reduced blur.
  • Run neural inference on every frame at full throughput.
  • Reduce end-to-end perception latency for closed-loop systems.
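To put the latency point in perspective, the per-frame time budget shrinks with frame rate. A quick back-of-the-envelope calculation (plain Python, not part of the DepthAI API):

```python
# Per-frame time budget at a given frame rate (pure arithmetic).
def frame_period_ms(fps: int) -> float:
    """Time between consecutive frames, in milliseconds."""
    return 1000.0 / fps

for fps in (30, 240, 480):
    print(f"{fps:>3} FPS -> {frame_period_ms(fps):.2f} ms per frame")
# 30 FPS leaves ~33 ms per frame; 480 FPS leaves ~2.08 ms.
```

At 480 FPS the whole capture-plus-inference budget per frame is roughly 2 ms, which is why every stage of the pipeline runs on-device.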

Supported HFR modes

Resolution     Frame rate
1920 x 1080    240 FPS
1280 x 720     480 FPS
Arbitrary FPS and custom HFR resolutions are not yet supported. If your model expects a different input shape, use ImageManip for on-device adaptation.
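When sizing the ImageManip output buffer for this adaptation, the examples below allocate 3 bytes per pixel for interleaved BGR888i frames. A minimal sketch of that arithmetic (the 512 x 288 model input is a hypothetical example, not a real model's shape):

```python
# Sketch: sizing the ImageManip output buffer when adapting HFR frames
# to a model's input shape. Interleaved BGR888i uses 3 bytes per pixel.
def manip_buffer_bytes(width: int, height: int, bytes_per_pixel: int = 3) -> int:
    """Bytes needed for setMaxOutputFrameSize() at the given output shape."""
    return width * height * bytes_per_pixel

# Hypothetical 512x288 model input:
print(manip_buffer_bytes(512, 288))  # 442368
```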

Example applications

  • Industrial automation with rapid part movement.
  • Robotics workloads that require faster control loops.
  • Sports analytics and high-speed motion tracking.
  • High-speed visual inspection systems.

Included example pipelines

These examples require the DepthAI v3 API; see the installation instructions.

Source code

High Frame Rate Object Detection (YOLOv6)

Python

#!/usr/bin/env python3
import depthai as dai
import sys

FPS = 480

with dai.Pipeline() as pipeline:
    device = pipeline.getDefaultDevice()
    platform = device.getPlatform()
    if platform != dai.Platform.RVC4:
        print("This example is only supported on IMX586 and Luxonis OS 1.20.5 or higher", file=sys.stderr)
        sys.exit(0)

    # Exit cleanly if the selected HFR mode is not advertised by CAM_A.
    supportsRequestedFps = False
    for cameraFeature in device.getConnectedCameraFeatures():
        if cameraFeature.socket != dai.CameraBoardSocket.CAM_A:
            continue
        for config in cameraFeature.configs:
            if config.width == 1280 and config.height == 720 and config.maxFps >= FPS:
                supportsRequestedFps = True
                break
        break
    if not supportsRequestedFps:
        print("This example is only supported on IMX586 and Luxonis OS 1.20.5 or higher", file=sys.stderr)
        sys.exit(0)

    # Download the model from the model zoo
    nnArchivePath = dai.getModelFromZoo(dai.NNModelDescription("yolov6-nano", platform="RVC4"))
    nnArchive = dai.NNArchive(nnArchivePath)
    inputSize = nnArchive.getInputSize()
    cameraNode = pipeline.create(dai.node.Camera).build()

    # Use ImageManip to resize frames to the model input, since HFR mode
    # does not yet support requesting arbitrary output resolutions directly.
    cameraOutput = cameraNode.requestOutput((1280, 720), fps=FPS)
    imageManip = pipeline.create(dai.node.ImageManip)
    imageManip.initialConfig.setOutputSize(inputSize[0], inputSize[1])
    imageManip.setMaxOutputFrameSize(int(inputSize[0] * inputSize[1] * 3))
    imageManip.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888i)
    imageManip.inputImage.setMaxSize(12)
    cameraOutput.link(imageManip.inputImage)

    # Configure the DetectionNetwork
    detectionNetwork = pipeline.create(dai.node.DetectionNetwork)
    detectionNetwork.setNNArchive(nnArchive)
    imageManip.out.link(detectionNetwork.input)

    # Report throughput once per second (every FPS messages)
    benchmarkIn = pipeline.create(dai.node.BenchmarkIn)
    benchmarkIn.setRunOnHost(True)
    benchmarkIn.sendReportEveryNMessages(FPS)
    detectionNetwork.out.link(benchmarkIn.input)

    qDet = detectionNetwork.out.createOutputQueue()
    pipeline.start()

    while pipeline.isRunning():
        inDet: dai.ImgDetections = qDet.get()
        # print(f"Got {len(inDet.detections)} nn detections")

Small live preview

Python

#!/usr/bin/env python3
import depthai as dai
import sys
import cv2

SIZE = (1280, 720)
FPS = 480

# Alternative HFR mode:
# SIZE = (1920, 1080)
# FPS = 240

with dai.Pipeline() as pipeline:
    device = pipeline.getDefaultDevice()
    platform = device.getPlatform()
    if platform != dai.Platform.RVC4:
        print("This example is only supported on IMX586 and Luxonis OS 1.20.5 or higher", file=sys.stderr)
        sys.exit(0)

    # Exit cleanly if the selected HFR mode is not advertised by CAM_A.
    supportsRequestedFps = False
    for cameraFeature in device.getConnectedCameraFeatures():
        if cameraFeature.socket != dai.CameraBoardSocket.CAM_A:
            continue
        for config in cameraFeature.configs:
            if config.width == SIZE[0] and config.height == SIZE[1] and config.maxFps >= FPS:
                supportsRequestedFps = True
                break
        break
    if not supportsRequestedFps:
        print("This example is only supported on IMX586 and Luxonis OS 1.20.5 or higher", file=sys.stderr)
        sys.exit(0)

    cam = pipeline.create(dai.node.Camera).build()
    benchmarkIn = pipeline.create(dai.node.BenchmarkIn)
    benchmarkIn.setRunOnHost(True)
    benchmarkIn.sendReportEveryNMessages(FPS)

    # Downscale frames to a small 250x250 preview
    imageManip = pipeline.create(dai.node.ImageManip)
    imageManip.initialConfig.setOutputSize(250, 250)
    imageManip.setMaxOutputFrameSize(int(250 * 250 * 1.6))

    # Only one of the two modes above can be selected.
    # NOTE: Generic resolutions are not yet supported through the Camera node in HFR mode.
    output = cam.requestOutput(SIZE, fps=FPS)

    output.link(imageManip.inputImage)
    imageManip.out.link(benchmarkIn.input)

    outputQueue = imageManip.out.createOutputQueue()

    pipeline.start()
    while pipeline.isRunning():
        imgFrame = outputQueue.get()
        assert isinstance(imgFrame, dai.ImgFrame)
        cv2.imshow("frame", imgFrame.getCvFrame())
        cv2.waitKey(1)

Save encoded stream

Python

import depthai as dai
import threading
import signal
import sys
import time

PROFILE = dai.VideoEncoderProperties.Profile.H264_MAIN

# Capture Ctrl+C / SIGTERM and set a flag to stop the loop
quitEvent = threading.Event()
signal.signal(signal.SIGTERM, lambda *_args: quitEvent.set())
signal.signal(signal.SIGINT, lambda *_args: quitEvent.set())

SIZE = (1280, 720)
FPS = 480

# Alternative HFR mode:
# SIZE = (1920, 1080)
# FPS = 240

class VideoSaver(dai.node.HostNode):
    """Host-side node that appends every encoded packet to a file."""
    def __init__(self, *args, **kwargs):
        dai.node.HostNode.__init__(self, *args, **kwargs)
        self.file_handle = open('video_hfr.encoded', 'wb')

    def build(self, *args):
        self.link_args(*args)
        return self

    def process(self, frame):
        frame.getData().tofile(self.file_handle)

with dai.Pipeline() as pipeline:
    device = pipeline.getDefaultDevice()
    platform = device.getPlatform()
    if platform != dai.Platform.RVC4:
        print("This example is only supported on IMX586 and Luxonis OS 1.20.5 or higher", file=sys.stderr)
        sys.exit(0)

    # Exit cleanly if the selected HFR mode is not advertised by CAM_A.
    supportsRequestedFps = False
    for cameraFeature in device.getConnectedCameraFeatures():
        if cameraFeature.socket != dai.CameraBoardSocket.CAM_A:
            continue
        for config in cameraFeature.configs:
            if config.width == SIZE[0] and config.height == SIZE[1] and config.maxFps >= FPS:
                supportsRequestedFps = True
                break
        break
    if not supportsRequestedFps:
        print("This example is only supported on IMX586 and Luxonis OS 1.20.5 or higher", file=sys.stderr)
        sys.exit(0)

    camRgb = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
    output = camRgb.requestOutput(SIZE, fps=FPS)

    # ImageManip works around a current VideoEncoder limitation with native
    # resolutions; this limitation will be lifted in the future.
    imageManip = pipeline.create(dai.node.ImageManip)
    imageManip.initialConfig.setOutputSize(SIZE[0], SIZE[1] + 10)  # To avoid a passthrough
    imageManip.setMaxOutputFrameSize(int(SIZE[0] * (SIZE[1] + 10) * 1.6))
    imageManip.inputImage.setMaxSize(12)
    output.link(imageManip.inputImage)
    output = imageManip.out

    benchmarkIn = pipeline.create(dai.node.BenchmarkIn)
    benchmarkIn.setRunOnHost(True)

    encoded = pipeline.create(dai.node.VideoEncoder).build(output,
            frameRate = FPS,
            profile = PROFILE)
    encoded.out.link(benchmarkIn.input)
    saver = pipeline.create(VideoSaver).build(encoded.out)

    pipeline.start()
    print("Started to save video to video_hfr.encoded")
    print("Press Ctrl+C to stop")
    while pipeline.isRunning() and not quitEvent.is_set():
        time.sleep(1)
    pipeline.stop()
    pipeline.wait()
    saver.file_handle.close()

print("To view the encoded data, convert the stream file (.encoded) into a video file (.mp4) using the command below:")
print(f"ffmpeg -framerate {FPS} -i video_hfr.encoded -c copy video_hfr.mp4")

print("If the FPS is not set correctly, you can ask ffmpeg to generate timestamps with the command below:")
print(f"""
ffmpeg -fflags +genpts -r {FPS} -i video_hfr.encoded \\
  -vsync cfr -fps_mode cfr \\
  -video_track_timescale {FPS}00 \\
  -c:v copy \\
  video_hfr.mp4
""")

Need assistance?

Head over to the Discussion Forum for technical support or any other questions you might have.