Mono Full Resolution Saver¶
This example saves 1280x720 .png files from the mono sensor as fast as it can. It serves as an example of recording mono frames to disk.
Be careful: this example saves images to your host's storage, so if you leave it running it can fill up your disk.
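One way to guard against this is to stop saving once free space runs low. The helper below is a minimal sketch, not part of the example; the 1 GiB threshold and the has_free_space name are arbitrary assumptions. It uses Python's standard shutil.disk_usage:

import shutil

MIN_FREE_BYTES = 1 * 1024**3  # Assumed threshold: stop saving below 1 GiB of free space

def has_free_space(path=".", min_free=MIN_FREE_BYTES):
    # shutil.disk_usage reports total/used/free bytes for the filesystem containing path
    return shutil.disk_usage(path).free > min_free

In the main loop you would then call cv2.imwrite only while has_free_space(dirName) returns True.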
Setup¶
Please run the install script to download all required dependencies. Note that this script must be run from within the cloned repository, so first download the depthai-python repository and then run the script:
git clone https://github.com/luxonis/depthai-python.git
cd depthai-python/examples
python3 install_requirements.py
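Once the script finishes, you can optionally verify the installation with a quick one-liner (a sanity check, not part of the example):

python3 -c "import depthai; print(depthai.__version__)"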
For additional information, please follow the installation guide.
Source code¶
Python (also available on GitHub)
#!/usr/bin/env python3

from pathlib import Path

import cv2
import depthai as dai
import time

# Create pipeline
pipeline = dai.Pipeline()

# Define source and output
monoRight = pipeline.create(dai.node.MonoCamera)
xoutRight = pipeline.create(dai.node.XLinkOut)

xoutRight.setStreamName("right")

# Properties
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)

# Linking
monoRight.out.link(xoutRight.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:

    # Output queue will be used to get the grayscale frames from the output defined above
    qRight = device.getOutputQueue(name="right", maxSize=4, blocking=False)

    dirName = "mono_data"
    Path(dirName).mkdir(parents=True, exist_ok=True)

    while True:
        inRight = qRight.get()  # Blocking call, will wait until new data has arrived
        # Data is originally represented as a flat 1D array, it needs to be converted into HxW form
        # Frame is transformed and ready to be shown
        cv2.imshow("right", inRight.getCvFrame())
        # After showing the frame, it's stored inside the target directory as a PNG image
        cv2.imwrite(f"{dirName}/{int(time.time() * 1000)}.png", inRight.getFrame())

        if cv2.waitKey(1) == ord('q'):
            break
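A note on throughput for the Python example above: cv2.imwrite compresses each PNG synchronously, so on slower hosts the disk write can become the bottleneck of the capture loop. Below is a minimal sketch, not part of the official example; save_queue, writer_worker, and the queue size are assumed names and values. It moves the encoding and writing onto a background thread:

import queue
import threading

import cv2

# Bounded queue so memory stays limited if the disk can't keep up
save_queue = queue.Queue(maxsize=30)

def writer_worker():
    # Drain the queue, writing each frame to disk; None is the stop sentinel
    while True:
        item = save_queue.get()
        if item is None:
            break
        path, frame = item
        cv2.imwrite(path, frame)  # PNG encoding now happens off the main thread
        save_queue.task_done()

threading.Thread(target=writer_worker, daemon=True).start()

# In the capture loop, replace the direct cv2.imwrite(...) call with:
#     save_queue.put((f"{dirName}/{int(time.time() * 1000)}.png", inRight.getFrame()))
# and enqueue the sentinel on exit:
#     save_queue.put(None)

The bounded queue is deliberate: if the writer falls behind, the capture loop blocks briefly instead of buffering frames without limit.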
C++ (also available on GitHub)
#include <chrono>
#include <iostream>
#include <sstream>

// Includes common necessary includes for development using depthai library
#include "depthai/depthai.hpp"
#include "utility.hpp"

int main() {
    using namespace std::chrono;

    // Create pipeline
    dai::Pipeline pipeline;

    // Define source and output
    auto monoRight = pipeline.create<dai::node::MonoCamera>();
    auto xoutRight = pipeline.create<dai::node::XLinkOut>();

    xoutRight->setStreamName("right");

    // Properties
    monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);
    monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);

    // Linking
    monoRight->out.link(xoutRight->input);

    // Connect to device and start pipeline
    dai::Device device(pipeline);

    // Output queue will be used to get the grayscale frames from the output defined above
    auto qRight = device.getOutputQueue("right", 4, false);

    std::string dirName = "mono_data";
    createDirectory(dirName);

    while(true) {
        auto inRight = qRight->get<dai::ImgFrame>();
        // Data is originally represented as a flat 1D array, it needs to be converted into HxW form
        // Frame is transformed and ready to be shown
        cv::imshow("right", inRight->getCvFrame());

        uint64_t time = duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
        std::stringstream videoStr;
        videoStr << dirName << "/" << time << ".png";
        // After showing the frame, it's stored inside the target directory as a PNG image
        cv::imwrite(videoStr.str(), inRight->getCvFrame());

        int key = cv::waitKey(1);
        if(key == 'q' || key == 'Q') {
            return 0;
        }
    }
    return 0;
}