Video Encode

Supported on: RVC2, RVC4
This example shows how to use the VideoEncoder node, which encodes video frames on-device into the MJPEG, H264, or H265 codec. It creates a custom host node called VideoSaver, which receives encoded frames from the VideoEncoder node and saves them to a file on the host computer. After the recording ends, use ffmpeg to convert the raw encoded stream into a playable video file. You could extend the VideoSaver node to save directly into a container, as in the Save encoded video stream into mp4 container experiment.
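Because VideoSaver simply appends each encoded packet, the resulting file is a raw elementary stream. For the MJPEG profile every packet is a complete JPEG image, so the file can even be split back into individual frames without ffmpeg by scanning for the JPEG SOI (FF D8) and EOI (FF D9) markers; JPEG byte stuffing keeps these marker sequences out of the entropy-coded data. A minimal sketch (pure Python; the function name and the assumption of an untruncated stream are ours):

```python
def split_mjpeg_frames(data: bytes) -> list[bytes]:
    """Split a raw MJPEG stream into individual JPEG images.

    Each JPEG begins with the SOI marker (FF D8) and ends with EOI (FF D9).
    """
    frames = []
    start = data.find(b"\xff\xd8")
    while start != -1:
        end = data.find(b"\xff\xd9", start + 2)
        if end == -1:
            break  # truncated final frame; discard it
        frames.append(data[start:end + 2])
        start = data.find(b"\xff\xd8", end + 2)
    return frames

# Example: count the frames saved by VideoSaver (MJPEG profile only)
# with open("video.encoded", "rb") as f:
#     print(len(split_mjpeg_frames(f.read())), "frames")
```

For H264/H265 streams the packets are NAL units rather than standalone images, so this trick applies to MJPEG only.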

Demo output

Command Line
python3 video_encode.py
Started to save video to video.encoded
Press Ctrl+C to stop
To view the encoded data, convert the stream file (.encoded) into a video file (.mp4) using the command below:
ffmpeg -framerate 30 -i video.encoded -c copy video.mp4
After running the ffmpeg command, use VLC to view the resulting file, especially for the H265 format, which is not supported by all video players (e.g. QuickTime on macOS). This example requires the DepthAI v3 API; see the installation instructions.
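If ffmpeg fails to auto-detect the stream type, you can name the raw-stream demuxer explicitly (ffmpeg's built-in mjpeg, h264, and hevc demuxers). A small helper that builds the matching command for each encoder profile — the helper function and the profile-name strings are our own sketch, and ffmpeg must be on your PATH to actually run the command:

```python
import subprocess

# ffmpeg's raw elementary-stream demuxer for each VideoEncoder profile.
# (This mapping is our own sketch; the demuxer names are ffmpeg's.)
DEMUXERS = {"MJPEG": "mjpeg", "H264_MAIN": "h264", "H265_MAIN": "hevc"}

def build_ffmpeg_cmd(profile: str, src: str = "video.encoded",
                     dst: str = "video.mp4", fps: int = 30) -> list[str]:
    """Build an ffmpeg argv that wraps the raw stream into an MP4
    without re-encoding (-c copy)."""
    return ["ffmpeg", "-f", DEMUXERS[profile], "-framerate", str(fps),
            "-i", src, "-c", "copy", dst]

# To actually run it (requires ffmpeg on PATH):
# subprocess.run(build_ffmpeg_cmd("MJPEG"), check=True)
```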

Pipeline

Source Code

Python

import depthai as dai

# Capture Ctrl+C and set a flag to stop the loop
import time
import cv2
import threading
import signal

PROFILE = dai.VideoEncoderProperties.Profile.MJPEG # or H265_MAIN, H264_MAIN

quitEvent = threading.Event()
signal.signal(signal.SIGTERM, lambda *_args: quitEvent.set())
signal.signal(signal.SIGINT, lambda *_args: quitEvent.set())

class VideoSaver(dai.node.HostNode):
    def __init__(self, *args, **kwargs):
        dai.node.HostNode.__init__(self, *args, **kwargs)
        self.file_handle = open('video.encoded', 'wb')

    def build(self, *args):
        self.link_args(*args)
        return self

    def process(self, frame):
        frame.getData().tofile(self.file_handle)

with dai.Pipeline() as pipeline:
    camRgb = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
    output = camRgb.requestOutput((1920, 1440), type=dai.ImgFrame.Type.NV12)
    outputQueue = output.createOutputQueue()
    encoded = pipeline.create(dai.node.VideoEncoder).build(output,
            frameRate = 30,
            profile = PROFILE)
    saver = pipeline.create(VideoSaver).build(encoded.out)

    pipeline.start()
    print("Started to save video to video.encoded")
    print("Press Ctrl+C to stop")
    timeStart = time.monotonic()
    while pipeline.isRunning() and not quitEvent.is_set():
        frame = outputQueue.get()
        assert isinstance(frame, dai.ImgFrame)
        cv2.imshow("video", frame.getCvFrame())
        key = cv2.waitKey(1)
        if key == ord('q'):
            break
    pipeline.stop()
    pipeline.wait()
    saver.file_handle.close()

print("To view the encoded data, convert the stream file (.encoded) into a video file (.mp4) using the command below:")
print("ffmpeg -framerate 30 -i video.encoded -c copy video.mp4")
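The Ctrl+C handling in the listing above is worth noting: instead of letting a KeyboardInterrupt unwind the stack at an arbitrary point (possibly mid-write, corrupting video.encoded), the signal handlers only set a threading.Event, and the main loop checks it so the pipeline can stop and the file handle can close cleanly. The pattern in isolation (standard library only; the names are ours):

```python
import signal
import threading

# Flag the signal handlers set; the main loop polls it and exits cleanly
# instead of being interrupted mid-iteration.
quit_event = threading.Event()

def _request_quit(signum, frame):
    quit_event.set()

signal.signal(signal.SIGINT, _request_quit)
signal.signal(signal.SIGTERM, _request_quit)

# Main-loop shape, mirroring the example above:
# while pipeline.isRunning() and not quit_event.is_set():
#     ...
# pipeline.stop(); pipeline.wait()  # still runs after Ctrl+C
```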

C++

#include <atomic>
#include <csignal>
#include <fstream>
#include <iostream>
#include <memory>
#include <opencv2/opencv.hpp>
#include <thread>

#include "depthai/depthai.hpp"
#include "depthai/pipeline/datatype/MessageGroup.hpp"

// Global flag for graceful shutdown
std::atomic<bool> quitEvent(false);

// Signal handler
void signalHandler(int signum) {
    quitEvent = true;
}

// Custom host node for saving video data
class VideoSaver : public dai::node::CustomNode<VideoSaver> {
   public:
    VideoSaver() : fileHandle("video.encoded", std::ios::binary) {
        if(!fileHandle.is_open()) {
            throw std::runtime_error("Could not open video.encoded for writing");
        }
    }

    ~VideoSaver() {
        if(fileHandle.is_open()) {
            fileHandle.close();
        }
    }

    std::shared_ptr<dai::Buffer> processGroup(std::shared_ptr<dai::MessageGroup> message) override {
        if(!fileHandle.is_open()) return nullptr;

        // Get raw data and write to file
        auto frame = message->get<dai::EncodedFrame>("data");
        unsigned char* frameData = frame->getData().data();
        size_t frameSize = frame->getData().size();
        std::cout << "Storing frame of size: " << frameSize << std::endl;
        fileHandle.write(reinterpret_cast<const char*>(frameData), frameSize);

        // Don't send anything back
        return nullptr;
    }

   private:
    std::ofstream fileHandle;
};

int main() {
    // Set up signal handlers
    signal(SIGTERM, signalHandler);
    signal(SIGINT, signalHandler);

    // Create device
    std::shared_ptr<dai::Device> device = std::make_shared<dai::Device>();

    // Create pipeline
    dai::Pipeline pipeline(device);

    // Create nodes
    auto camRgb = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_A);
    auto output = camRgb->requestOutput(std::make_pair(1920, 1440), dai::ImgFrame::Type::NV12);
    auto outputQueue = output->createOutputQueue();

    // Create video encoder node
    auto encoded = pipeline.create<dai::node::VideoEncoder>();
    encoded->setDefaultProfilePreset(30, dai::VideoEncoderProperties::Profile::MJPEG);
    output->link(encoded->input);

    // Create video saver node
    auto saver = pipeline.create<VideoSaver>();
    encoded->out.link(saver->inputs["data"]);

    // Start pipeline
    pipeline.start();
    std::cout << "Started to save video to video.encoded" << std::endl;
    std::cout << "Press Ctrl+C to stop" << std::endl;

    auto timeStart = std::chrono::steady_clock::now();

    while(pipeline.isRunning() && !quitEvent) {
        auto frame = outputQueue->get<dai::ImgFrame>();
        if(frame == nullptr) continue;

        cv::imshow("video", frame->getCvFrame());

        int key = cv::waitKey(1);
        if(key == 'q') {
            break;
        }
    }

    // Cleanup
    pipeline.stop();
    pipeline.wait();

    std::cout << "To view the encoded data, convert the stream file (.encoded) into a video file (.mp4) using the command below:" << std::endl;
    std::cout << "ffmpeg -framerate 30 -i video.encoded -c copy video.mp4" << std::endl;

    return 0;
}

Need assistance?

Head over to the Discussion Forum for technical support or any other questions you might have.