Encoding Max Limit
This example shows how to set up the encoder node to encode the RGB camera and both grayscale cameras (of DepthAI/OAK-D) at the same time, with all encoder parameters set to maximum quality and FPS. The RGB stream is set to 4K (3840x2160) and each grayscale stream to 1280x720, all at 25 FPS. Each encoded video stream is transferred over XLink and saved to its own file.
Pressing Ctrl+C stops the recording; the script then prints the ffmpeg commands you can use to convert the raw streams (.h264/.h265) into playable .mp4 files. Note that ffmpeg must be installed and runnable for the conversion to mp4 to succeed.
Be careful: this example saves encoded video to your host storage, so if you leave it running, it can fill up the storage on your host.
It's a variation of RGB Encoding and RGB & Mono Encoding.
Similar samples:
- RGB Encoding
- RGB & Mono Encoding
- RGB Encoding & MobilenetSSD
- RGB Encoding & Mono & MobilenetSSD
- RGB Encoding & Mono with MobilenetSSD & Depth
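The frame rates above can be sanity-checked by adding up the pixel throughput the hardware encoder has to sustain: each stream's resolution times its frame rate. A minimal sketch (the `encoder_load` helper is just for illustration, not part of the DepthAI API):

```python
# Aggregate pixel throughput for this example: one 4K color stream
# plus two 720p grayscale streams, all at the same frame rate.
def encoder_load(fps):
    rgb = 3840 * 2160    # 4K color frame, pixels
    mono = 1280 * 720    # each grayscale frame, pixels
    return (rgb + 2 * mono) * fps

print(encoder_load(25))  # 253440000 pixels/s -- accepted by the encoder
print(encoder_load(26))  # 263577600 pixels/s -- 26 FPS triggers an error
```

This makes it concrete why the sample pins everything to 25 FPS: bumping any stream higher pushes the combined load past what the encoder accepts.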
Demo
Setup
Please run the install script to download all required dependencies. Please note that this script must be run from a git context, so you have to clone the depthai-python repository first and then run the script.
Command Line
git clone https://github.com/luxonis/depthai-python.git
cd depthai-python/examples
python3 install_requirements.py
Source code
Python
#!/usr/bin/env python3

import depthai as dai

# Create pipeline
pipeline = dai.Pipeline()

# Define sources and outputs
camRgb = pipeline.create(dai.node.ColorCamera)
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
ve1 = pipeline.create(dai.node.VideoEncoder)
ve2 = pipeline.create(dai.node.VideoEncoder)
ve3 = pipeline.create(dai.node.VideoEncoder)

ve1Out = pipeline.create(dai.node.XLinkOut)
ve2Out = pipeline.create(dai.node.XLinkOut)
ve3Out = pipeline.create(dai.node.XLinkOut)

ve1Out.setStreamName('ve1Out')
ve2Out.setStreamName('ve2Out')
ve3Out.setStreamName('ve3Out')

# Properties
camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)
monoLeft.setCamera("left")
monoRight.setCamera("right")

# Setting to 26fps will trigger error
ve1.setDefaultProfilePreset(25, dai.VideoEncoderProperties.Profile.H264_MAIN)
ve2.setDefaultProfilePreset(25, dai.VideoEncoderProperties.Profile.H265_MAIN)
ve3.setDefaultProfilePreset(25, dai.VideoEncoderProperties.Profile.H264_MAIN)

# Linking
monoLeft.out.link(ve1.input)
camRgb.video.link(ve2.input)
monoRight.out.link(ve3.input)

ve1.bitstream.link(ve1Out.input)
ve2.bitstream.link(ve2Out.input)
ve3.bitstream.link(ve3Out.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as dev:

    # Output queues will be used to get the encoded data from the outputs defined above
    outQ1 = dev.getOutputQueue('ve1Out', maxSize=30, blocking=True)
    outQ2 = dev.getOutputQueue('ve2Out', maxSize=30, blocking=True)
    outQ3 = dev.getOutputQueue('ve3Out', maxSize=30, blocking=True)

    # Processing loop
    with open('mono1.h264', 'wb') as fileMono1H264, open('color.h265', 'wb') as fileColorH265, open('mono2.h264', 'wb') as fileMono2H264:
        print("Press Ctrl+C to stop encoding...")
        while True:
            try:
                # Empty each queue
                while outQ1.has():
                    outQ1.get().getData().tofile(fileMono1H264)

                while outQ2.has():
                    outQ2.get().getData().tofile(fileColorH265)

                while outQ3.has():
                    outQ3.get().getData().tofile(fileMono2H264)
            except KeyboardInterrupt:
                break

    print("To view the encoded data, convert the stream files (.h264/.h265) into video files (.mp4), using the commands below:")
    cmd = "ffmpeg -framerate 25 -i {} -c copy {}"
    print(cmd.format("mono1.h264", "mono1.mp4"))
    print(cmd.format("mono2.h264", "mono2.mp4"))
    print(cmd.format("color.h265", "color.mp4"))
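The conversion commands the script prints can also be built and run from Python. A small sketch (the `mp4_cmd` helper is hypothetical, not part of this sample or of DepthAI; it assumes ffmpeg is installed on the host):

```python
def mp4_cmd(src, dst, framerate=25):
    # -c copy repackages the elementary stream into an .mp4 container
    # without re-encoding, so it is fast and lossless.
    return ["ffmpeg", "-framerate", str(framerate), "-i", src, "-c", "copy", dst]

for src, dst in [("mono1.h264", "mono1.mp4"),
                 ("mono2.h264", "mono2.mp4"),
                 ("color.h265", "color.mp4")]:
    print(" ".join(mp4_cmd(src, dst)))
```

To execute a conversion directly instead of printing it, the argument list can be passed to `subprocess.run(mp4_cmd(src, dst), check=True)`.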
Pipeline
Need assistance?
Head over to the Discussion Forum for technical support or any other questions you might have.