Camera
The Camera node is a source of ImgFrame messages. You can control it at runtime with the inputControl and inputConfig inputs. It aims to unify the ColorCamera and MonoCamera into one node.
Compared to the ColorCamera node, the Camera node:
- Supports cam.setSize(), which replaces both cam.setResolution() and cam.setIspScale(). The Camera node will automatically find the sensor resolution that fits best and apply the correct scaling to achieve the user-selected size
- Supports cam.setCalibrationAlpha(); example here: Undistort camera stream
- Supports cam.loadMeshData() and cam.setMeshStep(), which can be used for custom image warping (undistortion, perspective correction, etc.)
- Automatically undistorts the camera stream if the HFOV of the camera is greater than 85°. You can disable this with cam.setMeshSource(dai.CameraProperties.WarpMeshSource.NONE)
- Doesn't have an out output, as it has the same outputs as ColorCamera (raw, isp, still, preview, video). For mono sensors this means that preview will output 3 planes of the same grayscale frame (3x overhead), and isp/video/still will output luma (the useful grayscale information) + chroma (all values are 128), which results in 1.5x bandwidth overhead
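For instance, a minimal sketch of these options (assuming the standard DepthAI v2 API; the alpha value here is only an illustrative choice):
Python
import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera)
cam.setSize(1280, 720)        # replaces setResolution() + setIspScale()
cam.setCalibrationAlpha(1.0)  # keep all pixels in the undistorted frame
# Disable the automatic undistortion (otherwise applied when HFOV > 85°):
cam.setMeshSource(dai.CameraProperties.WarpMeshSource.NONE)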
How to place it
Python
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera)
Inputs and Outputs
               ┌─────────────────────────────────┐
               │  ┌─────────────┐                │
               │  │    Image    │ raw            │    raw
               │  │    Sensor   │---┬------------├───────►
               │  └────▲────────┘   |            │
               │       │   ┌--------┘            │
               │     ┌─┴───▼─┐                   │    isp
inputControl   │     │       │-------┬-----------├───────►
──────────────►│-----│  ISP  │  ┌────▼─────┐     │  video
               │     │       │  |          |-----├───────►
               │     └───────┘  │   Image  │     │  still
inputConfig    │                │   Post-  │-----├───────►
──────────────►│----------------|Processing│     │ preview
               │                │          │-----├───────►
               │                └──────────┘     │
               └─────────────────────────────────┘
- inputConfig - ImageManipConfig
- inputControl - CameraControl
- raw - ImgFrame - RAW10 bayer data. Demo code for unpacking here
- isp - ImgFrame - YUV420 planar (same as YU12/IYUV/I420)
- still - ImgFrame - NV12, suitable for bigger size frames. The image gets created when a capture event is sent to the Camera, so it's like taking a photo
- preview - ImgFrame - RGB (or BGR planar/interleaved if configured), mostly suited for small size previews and for feeding the image into a NeuralNetwork
- video - ImgFrame - NV12, suitable for bigger size frames
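Since still frames are produced only on a capture event, here is a sketch of triggering one from the host (the stream names are arbitrary choices, not fixed API names):
Python
import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera)

xoutStill = pipeline.create(dai.node.XLinkOut)
xoutStill.setStreamName("still")
cam.still.link(xoutStill.input)

xinControl = pipeline.create(dai.node.XLinkIn)
xinControl.setStreamName("control")
xinControl.out.link(cam.inputControl)

with dai.Device(pipeline) as device:
    qStill = device.getOutputQueue("still")
    qControl = device.getInputQueue("control")
    ctrl = dai.CameraControl()
    ctrl.setCaptureStill(True)          # request a single still frame
    qControl.send(ctrl)
    frame = qStill.get().getCvFrame()   # blocks until the capture arrives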
Regarding video / preview / still frame sizes: still (when a capture is triggered) and isp work at the max camera resolution, while video and preview are limited to max 4K (3840x2160) resolution, which is cropped from isp. For IMX378 (12MP), the post-processing works like this:
┌─────┐  Cropping to    ┌─────────┐  Downscaling   ┌──────────┐
│ ISP ├────────────────►│  video  ├───────────────►│ preview  │
└─────┘  max 3840x2160  └─────────┘  and cropping  └──────────┘
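As a sketch (assuming, per the description above, that setSize() governs the isp output size):
Python
cam.setSize(4056, 3040)  # full 12MP on isp; video is still cropped to max 3840x2160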
If you aren't downscaling ISP, the video output is cropped to 4K (max 3840x2160, due to the limitation of the video output) from the full isp frame (12MP resolution on IMX378). The preview output is cropped further when the preview size is set to a 1:1 aspect ratio (e.g. when using a 300x300 preview size for the MobileNet-SSD NN model), because the preview output is derived from the video output.
Usage
Python
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera)
cam.setPreviewSize(300, 300)
cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)
# Instead of setting the resolution, the user can specify a size, which will
# set the sensor resolution to the best fit and also apply scaling
cam.setSize(1280, 720)
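To consume the configured streams on the host, link the outputs to XLinkOut nodes as usual (the stream name below is an arbitrary choice):
Python
xoutVideo = pipeline.create(dai.node.XLinkOut)
xoutVideo.setStreamName("video")
cam.video.link(xoutVideo.input)

with dai.Device(pipeline) as device:
    qVideo = device.getOutputQueue("video", maxSize=4, blocking=False)
    while True:
        frame = qVideo.get().getCvFrame()  # NV12 is converted to BGR for OpenCV
        # process or display the frame here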
Limitations
Here are the known camera limitations for RVC2:
- ISP can process about 600 MP/s, or about 500 MP/s when the pipeline is also running NNs and the video encoder in parallel
- 3A algorithms can process about 200-250 FPS overall (across all camera streams). This is a current limitation of our implementation, and we have plans for a workaround to run the 3A algorithms on every Xth frame; no ETA yet
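As a rough worked example of the ISP throughput budget (illustrative numbers only):
Python
# A 12MP (IMX378) stream at 30 FPS:
w, h, fps = 4056, 3040, 30
print(w * h * fps / 1e6)  # ~370 MP/s: fits the ~600 MP/s budget,
                          # but a second such stream would not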
Examples of functionality
Reference
class
depthai.node.Camera(depthai.Node)
method
getBoardSocket(self) -> depthai.CameraBoardSocket
Retrieves which board socket to use. Returns: Board socket to use
method
getCalibrationAlpha(self) -> float|None
Get calibration alpha parameter that determines FOV of undistorted frames
method
getCamera(self) -> str
Retrieves which camera to use by name. Returns: Name of the camera to use
method
getFps(self) -> float
Get the rate at which the camera should produce frames. Returns: Rate in frames per second
method
getHeight(self) -> int
Get sensor resolution height
method
getImageOrientation(self) -> depthai.CameraImageOrientation
Get camera image orientation
method
getMeshSource(self) -> depthai.CameraProperties.WarpMeshSource
Gets the source of the warp mesh
method
getMeshStep(self) -> tuple[int, int]
Gets the distance between mesh points
method
getPreviewHeight(self) -> int
Get preview height
method
getPreviewSize(self) -> tuple[int, int]
Get preview size as tuple
method
getPreviewWidth(self) -> int
Get preview width
method
getSize(self) -> tuple[int, int]
Get sensor resolution as size
method
getStillHeight(self) -> int
Get still height
method
getStillSize(self) -> tuple[int, int]
Get still size as tuple
method
getStillWidth(self) -> int
Get still width
method
getVideoHeight(self) -> int
Get video height
method
getVideoSize(self) -> tuple[int, int]
Get video size as tuple
method
getVideoWidth(self) -> int
Get video width
method
getWidth(self) -> int
Get sensor resolution width
method
loadMeshData(self, warpMesh: typing_extensions.Buffer)
Specify mesh calibration data for undistortion. See `loadMeshFile` for the expected data format
method
loadMeshFile(self, warpMesh: Path)
Specify a local filesystem path to the undistort mesh calibration file. When a mesh calibration is set, it overrides the camera intrinsics/extrinsics matrices and the useHomographyRectification behavior. Mesh format: a sequence of (y, x) points as 'float', with coordinates from the input image to be mapped to the output. The mesh can be subsampled, configured by `setMeshStep`. With a 1280x800 resolution and the default (16, 16) step, the required mesh size is: width: 1280 / 16 + 1 = 81, height: 800 / 16 + 1 = 51
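A sketch of building and loading an identity (no-op) warp mesh with the sizes from the example above (assumes cam is a Camera node and that loadMeshData accepts the raw float buffer, as its signature suggests):
Python
import numpy as np

step = 16
w, h = 1280, 800
mesh_w = w // step + 1  # 1280 / 16 + 1 = 81
mesh_h = h // step + 1  # 800 / 16 + 1 = 51
# Mesh points are (y, x) float coordinates into the input image
grid = np.mgrid[0:mesh_h, 0:mesh_w].astype(np.float32) * step
mesh = np.stack([grid[0], grid[1]], axis=-1)  # shape (51, 81, 2)
cam.setMeshStep(step, step)
cam.loadMeshData(mesh.tobytes())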
method
setBoardSocket(self, boardSocket: depthai.CameraBoardSocket)
Specify which board socket to use. Parameter ``boardSocket``: Board socket to use
method
setCalibrationAlpha(self, alpha: float)
Set calibration alpha parameter that determines FOV of undistorted frames
method
setCamera(self, name: str)
Specify which camera to use by name. Parameter ``name``: Name of the camera to use
method
setFps(self, fps: float)
Set the rate at which the camera should produce frames. Parameter ``fps``: Rate in frames per second
method
setImageOrientation(self, imageOrientation: depthai.CameraImageOrientation)
Set camera image orientation
method
setIsp3aFps(self, isp3aFps: int)
Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A runs on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also increases the convergence time of 3A. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps and a camera control is sent on every frame, but 3A FPS is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.
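For example, a sketch that halves the 3A rate relative to the sensor FPS:
Python
cam.setFps(30)
cam.setIsp3aFps(15)  # run AE/AWB/AF on every 2nd frame to reduce CSS CPU load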
method
setMeshSource(self, source: depthai.CameraProperties.WarpMeshSource)
Set the source of the warp mesh or disable
method
setMeshStep(self, width: int, height: int)
Set the distance between mesh points. Default: (32, 32)
method
setRawOutputPacked(self, packed: bool)
Configures whether the camera `raw` frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (`false`), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000'00pp'pppp'pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.
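The raw output described earlier carries this MIPI-packed data; here is a host-side unpacking sketch (unpack_raw10 is a hypothetical helper, assuming RAW10 with no row padding):
Python
import numpy as np

def unpack_raw10(packed: bytes, width: int, height: int) -> np.ndarray:
    # MIPI RAW10: every 5 bytes hold 4 pixels; bytes 0-3 are the 8 MSBs,
    # byte 4 packs the 2 LSBs of each pixel (pixel i in bits 2*i..2*i+1)
    data = np.frombuffer(packed, dtype=np.uint8).reshape(-1, 5).astype(np.uint16)
    msb = data[:, :4] << 2
    lsb = (data[:, 4:5] >> np.arange(0, 8, 2)) & 0x3
    return (msb | lsb).astype(np.uint16).reshape(height, width)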
property
frameEvent
Outputs metadata-only ImgFrame message as an early indicator of an incoming frame.
property
initialControl
Initial control options to apply to sensor
property
inputConfig
Input for ImageManipConfig message, which can modify crop parameters in runtime
property
inputControl
Input for CameraControl message, which can modify camera parameters in runtime
property
isp
Outputs ImgFrame message that carries YUV420 planar (I420/IYUV) frame data.
property
preview
Outputs ImgFrame message that carries BGR/RGB planar/interleaved encoded frame data.
property
raw
Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.
property
still
Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.
property
video
Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.