# Camera high frame rate (HFR)

High Frame Rate (HFR) mode in DepthAI 3.4.0 unlocks ultra-fast perception on OAK4 devices built on the RVC4 platform with the IMX586
sensor. HFR pipelines can capture and process frames at up to 480 FPS, including neural network inference on every frame.

> **Preview feature**
> HFR mode is an early preview in DepthAI `3.4.0`. The available modes are currently fixed to `1920x1080 @ 240 FPS` and `1280x720 @ 480 FPS`.

## Why HFR matters

 * Capture fast motion with reduced blur.
 * Run neural inference on every frame at full throughput.
 * Reduce end-to-end perception latency for closed-loop systems.

## Supported HFR modes

| Resolution | Frame rate |
| --- | --- |
| 1920 x 1080 | 240 FPS |
| 1280 x 720 | 480 FPS |
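
Both fixed modes move a comparable amount of raw pixel data; the short sketch below (plain Python, no device required) makes the throughput arithmetic explicit:

```python
# Raw pixel throughput of the two fixed HFR modes.
modes = {
    "1920x1080 @ 240 FPS": (1920, 1080, 240),
    "1280x720 @ 480 FPS": (1280, 720, 480),
}

for name, (w, h, fps) in modes.items():
    px_per_s = w * h * fps  # pixels produced by the sensor each second
    print(f"{name}: {px_per_s / 1e6:.1f} Mpx/s")
```

The 720p mode trades resolution for frame rate at a slightly lower total throughput (about 442 vs. 498 Mpx/s).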

Arbitrary frame rates and custom resolutions are not yet supported in HFR mode. If your model expects a different input shape, use
[ImageManip](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/image_manip.md) for on-device adaptation.
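
A note on buffer sizing: the detection example below requests `BGR888i` (interleaved 8-bit BGR) output from ImageManip, which needs width × height × 3 bytes per frame. The helper below is purely illustrative (the 512 × 288 input size is an assumption, not the actual YOLOv6 input shape):

```python
def bgr_interleaved_bytes(width: int, height: int) -> int:
    """Bytes needed for one BGR888i (interleaved, 8-bit) frame."""
    return width * height * 3

# Hypothetical 512x288 model input:
print(bgr_interleaved_bytes(512, 288))  # prints 442368
```

This is the kind of value passed to `setMaxOutputFrameSize()` in the detection example.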

## Example applications

 * Industrial automation with rapid part movement.
 * Robotics workloads that require faster control loops.
 * Sports analytics and high-speed motion tracking.
 * High-speed visual inspection systems.

## Included example pipelines

### Object Detection

Run YOLOv6 with HFR input at up to 480 FPS.

[Object Detection](https://github.com/luxonis/depthai-core/blob/v3.4.0/examples/python/HFR/hfr_nn.py)

### Small Live Preview

Display a lightweight preview stream at 240/480 FPS.

[Small Live Preview](https://github.com/luxonis/depthai-core/blob/v3.4.0/examples/python/HFR/hfr_small_preview.py)

### Video Encoding

Encode and save high-frame-rate video streams.

[Video Encoding](https://github.com/luxonis/depthai-core/blob/v3.4.0/examples/python/HFR/hfr_save_encoded.py)

These examples require the DepthAI v3 API; see the [installation instructions](https://docs.luxonis.com/software-v3/depthai.md).

## Source code

### High Frame Rate Object Detection (YOLOv6)

#### Python

```python
#!/usr/bin/env python3
import depthai as dai
import sys

FPS = 480

with dai.Pipeline() as pipeline:
    device = pipeline.getDefaultDevice()
    platform = device.getPlatform()
    if platform != dai.Platform.RVC4:
        print("This example is only supported on IMX586 and Luxonis OS 1.20.5 or higher", file=sys.stderr)
        sys.exit(0)

    # Exit cleanly if the selected HFR mode is not advertised by CAM_A.
    supportsRequestedFps = False
    for cameraFeature in device.getConnectedCameraFeatures():
        if cameraFeature.socket != dai.CameraBoardSocket.CAM_A:
            continue
        for config in cameraFeature.configs:
            if config.width == 1280 and config.height == 720 and config.maxFps >= FPS:
                supportsRequestedFps = True
                break
        break
    if not supportsRequestedFps:
        print("This example is only supported on IMX586 and Luxonis OS 1.20.5 or higher", file=sys.stderr)
        sys.exit(0)

    # Download the model
    nnArchivePath = dai.getModelFromZoo(dai.NNModelDescription("yolov6-nano", platform="RVC4"))
    nnArchive = dai.NNArchive(nnArchivePath)
    inputSize = nnArchive.getInputSize()
    cameraNode = pipeline.create(dai.node.Camera).build()

    # Use ImageManip to resize, since requesting arbitrary output sizes is not yet supported in HFR mode
    cameraOutput = cameraNode.requestOutput((1280, 720), fps=FPS)
    imageManip = pipeline.create(dai.node.ImageManip)
    imageManip.initialConfig.setOutputSize(inputSize[0], inputSize[1])
    imageManip.setMaxOutputFrameSize(int(inputSize[0] * inputSize[1] * 3))
    imageManip.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888i)
    imageManip.inputImage.setMaxSize(12)
    cameraOutput.link(imageManip.inputImage)

    # Configure the DetectionNetwork
    detectionNetwork = pipeline.create(dai.node.DetectionNetwork)
    detectionNetwork.setNNArchive(nnArchive)
    imageManip.out.link(detectionNetwork.input)

    benchmarkIn = pipeline.create(dai.node.BenchmarkIn)
    benchmarkIn.setRunOnHost(True)
    benchmarkIn.sendReportEveryNMessages(FPS)
    detectionNetwork.out.link(benchmarkIn.input)

    qDet = detectionNetwork.out.createOutputQueue()
    pipeline.start()

    while pipeline.isRunning():
        inDet: dai.ImgDetections = qDet.get()  # drain the queue to keep the pipeline flowing
        # print(f"Got {len(inDet.detections)} nn detections")
```
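
To check that the pipeline sustains the requested rate, you can also measure throughput on the host. The `RollingFps` helper below is a plain-Python sketch (not part of the DepthAI API) that you could tick once per received message:

```python
import time
from collections import deque

class RollingFps:
    """Estimate frames per second over a sliding window of timestamps."""

    def __init__(self, window: int = 480):
        self.stamps = deque(maxlen=window)

    def tick(self, now=None) -> float:
        """Record one frame arrival; return the current FPS estimate."""
        self.stamps.append(time.monotonic() if now is None else now)
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0

# Sanity-check with synthetic timestamps spaced 1/480 s apart.
fps = RollingFps(window=480)
est = 0.0
for i in range(480):
    est = fps.tick(now=i / 480)
print(round(est))  # prints 480
```

In the detection loop you would call `tick()` after each `qDet.get()` and log the estimate occasionally.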

### Small live preview

#### Python

```python
#!/usr/bin/env python3
import depthai as dai
import sys
import cv2

SIZE = (1280, 720)
FPS = 480

# SIZE = (1920, 1080)
# FPS = 240

with dai.Pipeline() as pipeline:
    device = pipeline.getDefaultDevice()
    platform = device.getPlatform()
    if platform != dai.Platform.RVC4:
        print("This example is only supported on IMX586 and Luxonis OS 1.20.5 or higher", file=sys.stderr)
        sys.exit(0)

    # Exit cleanly if the selected HFR mode is not advertised by CAM_A.
    supportsRequestedFps = False
    for cameraFeature in device.getConnectedCameraFeatures():
        if cameraFeature.socket != dai.CameraBoardSocket.CAM_A:
            continue
        for config in cameraFeature.configs:
            if config.width == SIZE[0] and config.height == SIZE[1] and config.maxFps >= FPS:
                supportsRequestedFps = True
                break
        break
    if not supportsRequestedFps:
        print("This example is only supported on IMX586 and Luxonis OS 1.20.5 or higher", file=sys.stderr)
        sys.exit(0)

    cam = pipeline.create(dai.node.Camera).build()
    benchmarkIn = pipeline.create(dai.node.BenchmarkIn)
    benchmarkIn.setRunOnHost(True)
    benchmarkIn.sendReportEveryNMessages(FPS)

    imageManip = pipeline.create(dai.node.ImageManip)
    imageManip.initialConfig.setOutputSize(250, 250)
    imageManip.setMaxOutputFrameSize(int(250 * 250 * 1.6))

    # One of the two modes can be selected
    # NOTE: Generic resolutions are not yet supported through camera node when using HFR mode
    output = cam.requestOutput(SIZE, fps=FPS)

    output.link(imageManip.inputImage)
    imageManip.out.link(benchmarkIn.input)

    outputQueue = imageManip.out.createOutputQueue()

    pipeline.start()
    while pipeline.isRunning():
        imgFrame = outputQueue.get()
        assert isinstance(imgFrame, dai.ImgFrame)
        cv2.imshow("frame", imgFrame.getCvFrame())
        cv2.waitKey(1)
```
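
The `1.6` multiplier used for `setMaxOutputFrameSize()` above appears to budget for an NV12 frame (1.5 bytes per pixel) plus some headroom; that reading is our assumption, not stated in the example. The arithmetic checks out:

```python
def nv12_bytes(width: int, height: int) -> int:
    """Bytes for one NV12 frame: full-res luma plus half-res interleaved chroma."""
    return width * height * 3 // 2

w, h = 250, 250
exact = nv12_bytes(w, h)   # 93750 bytes for the preview frame itself
budget = int(w * h * 1.6)  # 100000 bytes, the value used in the example
print(exact, budget, budget >= exact)  # prints: 93750 100000 True
```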

### Save encoded stream

#### Python

```python
import depthai as dai

import time
import threading
import signal
import sys

PROFILE = dai.VideoEncoderProperties.Profile.H264_MAIN

# Capture Ctrl+C / SIGTERM and set a flag to stop the loop cleanly
quitEvent = threading.Event()
signal.signal(signal.SIGTERM, lambda *_args: quitEvent.set())
signal.signal(signal.SIGINT, lambda *_args: quitEvent.set())

SIZE = (1280, 720)
FPS = 480

# SIZE = (1920, 1080)
# FPS = 240

class VideoSaver(dai.node.HostNode):
    def __init__(self, *args, **kwargs):
        dai.node.HostNode.__init__(self, *args, **kwargs)
        self.file_handle = open('video_hfr.encoded', 'wb')

    def build(self, *args):
        self.link_args(*args)
        return self

    def process(self, frame):
        frame.getData().tofile(self.file_handle)

with dai.Pipeline() as pipeline:
    device = pipeline.getDefaultDevice()
    platform = device.getPlatform()
    if platform != dai.Platform.RVC4:
        print("This example is only supported on IMX586 and Luxonis OS 1.20.5 or higher", file=sys.stderr)
        sys.exit(0)

    # Exit cleanly if the selected HFR mode is not advertised by CAM_A.
    supportsRequestedFps = False
    for cameraFeature in device.getConnectedCameraFeatures():
        if cameraFeature.socket != dai.CameraBoardSocket.CAM_A:
            continue
        for config in cameraFeature.configs:
            if config.width == SIZE[0] and config.height == SIZE[1] and config.maxFps >= FPS:
                supportsRequestedFps = True
                break
        break
    if not supportsRequestedFps:
        print("This example is only supported on IMX586 and Luxonis OS 1.20.5 or higher", file=sys.stderr)
        sys.exit(0)

    camRgb = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
    output = camRgb.requestOutput(SIZE, fps=FPS)

    # ImageManip is added to workaround a limitation with VideoEncoder with native resolutions
    # This limitation will be lifted in the future
    imageManip = pipeline.create(dai.node.ImageManip)
    imageManip.initialConfig.setOutputSize(SIZE[0], SIZE[1] + 10)  # Slightly change the size so the frame is not passed through untouched
    imageManip.setMaxOutputFrameSize(int(SIZE[0] * (SIZE[1] + 10) * 1.6))
    imageManip.inputImage.setMaxSize(12)
    output.link(imageManip.inputImage)
    output = imageManip.out

    benchmarkIn = pipeline.create(dai.node.BenchmarkIn)
    benchmarkIn.setRunOnHost(True)

    encoded = pipeline.create(dai.node.VideoEncoder).build(output,
            frameRate = FPS,
            profile = PROFILE)
    encoded.out.link(benchmarkIn.input)
    saver = pipeline.create(VideoSaver).build(encoded.out)

    pipeline.start()
    print("Started to save video to video_hfr.encoded")
    print("Press Ctrl+C to stop")
    timeStart = time.monotonic()
    while pipeline.isRunning() and not quitEvent.is_set():
        time.sleep(1)
    pipeline.stop()
    pipeline.wait()
    saver.file_handle.close()

print("To view the encoded data, convert the stream file (.encoded) into a video file (.mp4) using the command below:")
print(f"ffmpeg -framerate {FPS} -i video_hfr.encoded -c copy video_hfr.mp4")

print("If the FPS of the resulting video is not correct, you can ask ffmpeg to regenerate the timestamps with the command below")

print(f"""
ffmpeg -fflags +genpts -r {FPS} -i video_hfr.encoded \\
  -vsync cfr -fps_mode cfr \\
  -video_track_timescale {FPS}00 \\
  -c:v copy \\
  video_hfr.mp4
""")
```
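
For a quick sanity check of the saved stream without ffmpeg, you can count H.264 NAL units by scanning for Annex-B start codes. The snippet below is a standalone sketch; `video_hfr.encoded` is the file written by the example above:

```python
def count_nal_units(data: bytes) -> int:
    """Count Annex-B NAL units (00 00 01 or 00 00 00 01 start codes)."""
    count = 0
    i = 0
    while i < len(data) - 2:
        if data[i] == 0 and data[i + 1] == 0 and data[i + 2] == 1:
            count += 1
            i += 3  # skip past the start code
        else:
            i += 1
    return count

# Usage on the file written by the example above:
# with open("video_hfr.encoded", "rb") as f:
#     print(count_nal_units(f.read()))
```

A healthy recording should contain at least one NAL unit per encoded frame.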

### Need assistance?

Head over to the [Discussion Forum](https://discuss.luxonis.com/) for technical support or any other questions you might have.
