Video Encode

This example shows how to use the VideoEncoder node, which encodes video frames on-device into the MJPEG, H264, or H265 codec. It creates a custom host node called VideoSaver, which receives encoded frames from the VideoEncoder node and appends them to a file on the host computer. After the recording ends, use ffmpeg to convert the raw encoded stream into a playable video file. You could extend the VideoSaver node to write directly into a container, as in the Save encoded video stream into mp4 container experiment.

Demo output

Command Line
python3 video_encode.py
Started to save video to video.encoded
Press Ctrl+C to stop
To view the encoded data, convert the stream file (.encoded) into a video file (.mp4) using a command below:
ffmpeg -framerate 30 -i video.encoded -c copy video.mp4
After running the ffmpeg command, use a player such as VLC to view the resulting file; H265 in particular is not supported by all video players (e.g. QuickTime on macOS).
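If you prefer to trigger the conversion from Python rather than typing the command by hand, a small helper like the one below can build and run the same ffmpeg invocation. This is a hedged sketch, not part of the example itself: the function name is hypothetical, and it assumes ffmpeg is available on your PATH.

```python
import subprocess

def build_ffmpeg_mux_cmd(in_path: str, out_path: str, framerate: int = 30) -> list:
    """Build an ffmpeg command that wraps a raw encoded stream in an
    MP4 container without re-encoding (-c copy, so no quality loss)."""
    return [
        "ffmpeg",
        "-framerate", str(framerate),  # raw streams carry no timing info
        "-i", in_path,
        "-c", "copy",                  # copy packets instead of re-encoding
        out_path,
    ]

# Uncomment to run the conversion (requires ffmpeg on PATH and an existing file):
# subprocess.run(build_ffmpeg_mux_cmd("video.encoded", "video.mp4"), check=True)
```

The `-framerate` value should match the `frameRate` passed to the VideoEncoder node, since the raw stream itself does not store timestamps.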

Setup

This example requires the DepthAI v3 API, see installation instructions.

Pipeline

Source Code

Python
import depthai as dai

import time
import cv2
import threading
import signal

PROFILE = dai.VideoEncoderProperties.Profile.MJPEG  # or H265_MAIN, H264_MAIN

# Capture Ctrl+C (and SIGTERM) and set a flag to stop the loop
quitEvent = threading.Event()
signal.signal(signal.SIGTERM, lambda *_args: quitEvent.set())
signal.signal(signal.SIGINT, lambda *_args: quitEvent.set())

class VideoSaver(dai.node.HostNode):
    def __init__(self, *args, **kwargs):
        dai.node.HostNode.__init__(self, *args, **kwargs)
        self.file_handle = open('video.encoded', 'wb')

    def build(self, *args):
        self.link_args(*args)
        return self

    def process(self, frame):
        # Append each encoded frame's bitstream to the output file
        frame.getData().tofile(self.file_handle)

with dai.Pipeline() as pipeline:
    camRgb = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
    output = camRgb.requestOutput((1920, 1440), type=dai.ImgFrame.Type.NV12)
    outputQueue = output.createOutputQueue()
    encoded = pipeline.create(dai.node.VideoEncoder).build(output,
            frameRate = 30,
            profile = PROFILE)
    saver = pipeline.create(VideoSaver).build(encoded.out)

    pipeline.start()
    print("Started to save video to video.encoded")
    print("Press Ctrl+C to stop")
    timeStart = time.monotonic()
    while pipeline.isRunning() and not quitEvent.is_set():
        frame = outputQueue.get()
        assert isinstance(frame, dai.ImgFrame)
        cv2.imshow("video", frame.getCvFrame())
        key = cv2.waitKey(1)
        if key == ord('q'):
            break
    pipeline.stop()
    pipeline.wait()
    saver.file_handle.close()

print("To view the encoded data, convert the stream file (.encoded) into a video file (.mp4) using a command below:")
print("ffmpeg -framerate 30 -i video.encoded -c copy video.mp4")

Need assistance?

Head over to the Discussion Forum for technical support or any other questions you might have.