RGB video
This example shows how to use high-resolution video at low latency. Compared to RGB Preview, this demo outputs NV12 frames, whereas preview frames are BGR and are not suited for larger resolutions (e.g. 1920x1080). Preview is more suitable for NN input or visualization purposes.
Similar samples:
RGB Preview (lower resolution)
Demo
Setup
Please run the install script to download all required dependencies. Note that this script must be run from within the cloned repository, so download the depthai-python repository first and then run the script:
```shell
git clone https://github.com/luxonis/depthai-python.git
cd depthai-python/examples
python3 install_requirements.py
```
For additional information, please follow the installation guide.
Source code
Python

Also available on GitHub
```python
#!/usr/bin/env python3

import cv2
import depthai as dai

# Create pipeline
pipeline = dai.Pipeline()

# Define source and output
camRgb = pipeline.create(dai.node.ColorCamera)
xoutVideo = pipeline.create(dai.node.XLinkOut)

xoutVideo.setStreamName("video")

# Properties
camRgb.setBoardSocket(dai.CameraBoardSocket.RGB)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setVideoSize(1920, 1080)

xoutVideo.input.setBlocking(False)
xoutVideo.input.setQueueSize(1)

# Linking
camRgb.video.link(xoutVideo.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:

    video = device.getOutputQueue(name="video", maxSize=1, blocking=False)

    while True:
        videoIn = video.get()

        # Get BGR frame from NV12 encoded video frame to show with opencv
        # Visualizing the frame on slower hosts might have overhead
        cv2.imshow("video", videoIn.getCvFrame())

        if cv2.waitKey(1) == ord('q'):
            break
```
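When tuning for low latency it helps to measure the frame rate actually achieved on the host. The helper below is a hypothetical sketch (not part of the depthai API): it keeps a rolling window of frame timestamps, and in the loop above you would call `counter.tick()` after each `video.get()` and read `counter.fps()`:

```python
import collections
import time

class FPSCounter:
    """Rolling frames-per-second estimate over the last `window` frames."""

    def __init__(self, window=30):
        self.stamps = collections.deque(maxlen=window)

    def tick(self, now=None):
        # Record a frame arrival; `now` allows injecting timestamps for testing
        self.stamps.append(time.monotonic() if now is None else now)

    def fps(self):
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0

counter = FPSCounter()
for i in range(31):
    counter.tick(now=i / 30.0)  # simulate a steady 30 FPS stream
print(round(counter.fps()))  # 30
```

A rolling window smooths out per-frame jitter while still reflecting recent throughput, which is more useful than a cumulative average when diagnosing latency spikes.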
C++

Also available on GitHub
```cpp
#include <iostream>

// Includes common necessary includes for development using depthai library
#include "depthai/depthai.hpp"

int main() {
    // Create pipeline
    dai::Pipeline pipeline;

    // Define source and output
    auto camRgb = pipeline.create<dai::node::ColorCamera>();
    auto xoutVideo = pipeline.create<dai::node::XLinkOut>();

    xoutVideo->setStreamName("video");

    // Properties
    camRgb->setBoardSocket(dai::CameraBoardSocket::RGB);
    camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
    camRgb->setVideoSize(1920, 1080);

    xoutVideo->input.setBlocking(false);
    xoutVideo->input.setQueueSize(1);

    // Linking
    camRgb->video.link(xoutVideo->input);

    // Connect to device and start pipeline
    dai::Device device(pipeline);

    auto video = device.getOutputQueue("video");

    while(true) {
        auto videoIn = video->get<dai::ImgFrame>();

        // Get BGR frame from NV12 encoded video frame to show with opencv
        // Visualizing the frame on slower hosts might have overhead
        cv::imshow("video", videoIn->getCvFrame());

        int key = cv::waitKey(1);
        if(key == 'q' || key == 'Q') {
            return 0;
        }
    }
    return 0;
}
```
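Both samples configure the output with `setBlocking(False)` and a queue size of 1, which keeps latency low: when a new frame arrives and the queue is full, the oldest frame is discarded rather than making the device wait. A host-side analogue of that drop-oldest behavior can be sketched with a bounded deque:

```python
from collections import deque

# A queue of size 1 with drop-oldest semantics, mirroring a
# non-blocking output queue: new arrivals evict stale frames
q = deque(maxlen=1)
for frame_id in range(5):
    q.append(frame_id)  # frames 0..3 are silently dropped

print(q[0])  # 4 -- only the newest "frame" survives
```

The trade-off is that frames may be skipped; a blocking queue would instead deliver every frame at the cost of added latency (and potential device-side stalling) when the host cannot keep up.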