Camera
Camera is a single, unified source node that replaces the separate ColorCamera and MonoCamera nodes. It produces ImgFrame messages which can be used for image processing and neural network inference.
Key Features
- Auto‑selection of sensor resolution & FPS when you don’t specify them — no more manual bookkeeping.
- requestOutput(): request one or more streams with a chosen size, type, resize mode, FPS and optional undistortion:
Python
cam.requestOutput(size=(640, 480), type=dai.ImgFrame.Type.NV12,
                  resize_mode=dai.ImgResizeMode.CROP,
                  enableUndistortion=True)
- Auto undistortion with the enableUndistortion flag (default = None).
- requestFullResolutionOutput() with a safety guard (stays ≤ 5000×4000 unless you pass useHighestResolution=True).
- setMockIsp() for synthetic or recorded inputs (attach a ReplayVideo node to feed pre‑captured frames). A short sketch of these two calls follows below.
Coming from ColorCamera or MonoCamera? Just search‑and‑replace the node creation and delete any manual setIspScale() logic.
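The sketch below illustrates the two calls mentioned in the feature list: requesting a capped full‑resolution output and mocking the ISP with a ReplayVideo node. The recording path and the setReplayVideoFile() call are illustrative assumptions; adjust them to your recording and DepthAI version.
Python
import depthai as dai

with dai.Pipeline() as pipeline:
    cam = pipeline.create(dai.node.Camera)
    cam.build(dai.CameraBoardSocket.CAM_A)

    # Full-FOV output; capped at 5000x4000 unless useHighestResolution=True is passed
    full = cam.requestFullResolutionOutput(type=dai.ImgFrame.Type.NV12, fps=5)

    # Feed pre-recorded frames instead of the live sensor ("mock ISP").
    # "recording.mp4" is a placeholder path; setReplayVideoFile() is assumed here.
    replay = pipeline.create(dai.node.ReplayVideo)
    replay.setReplayVideoFile("recording.mp4")
    cam.setMockIsp(replay)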
Placing the node
Python
with dai.Pipeline() as pipeline:
    cam = pipeline.create(dai.node.Camera)
    cam.build(dai.CameraBoardSocket.CAM_A)  # optional — autodetects otherwise
Inputs & Outputs
Inputs: inputControl, mockIsp. Outputs: raw, plus any outputs created via requestOutput() / requestFullResolutionOutput() (see the API reference below).
Resizing modes

- Crop: No NN accuracy decrease. Cons: The frame is cropped, so the full FOV is lost.
- Letterbox: Preserves the full FOV. Cons: The smaller effective frame contains fewer features, which can decrease NN accuracy.
- Stretch: Preserves the full FOV. Cons: Because frames are stretched (aspect ratio is not preserved), NN accuracy might decrease.
Python
cam.requestOutput(size=(640, 480), type=dai.ImgFrame.Type.NV12,
                  resize_mode=dai.ImgResizeMode.CROP)
cam.requestOutput(size=(640, 480), type=dai.ImgFrame.Type.NV12,
                  resize_mode=dai.ImgResizeMode.STRETCH)
cam.requestOutput(size=(640, 480), type=dai.ImgFrame.Type.NV12,
                  resize_mode=dai.ImgResizeMode.LETTERBOX)
Usage Examples
Python
pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.Camera)
cam.build(boardSocket=dai.CameraBoardSocket.CAM_A)

# 1) Low‑latency preview for video encoder
nn_in = cam.requestOutput(size=(300,300), type=dai.ImgFrame.Type.NV12, fps=30)

# 2) HD stream for recording
hd_out = cam.requestOutput(size=(1280,720), type=dai.ImgFrame.Type.BGR888p, fps=30)

# 3) Full‑res stills every second
full = cam.requestFullResolutionOutput(type=dai.ImgFrame.Type.BGR888p, fps=1)

# Link to downstream nodes …
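From here the outputs can be linked to on‑device nodes (e.g. VideoEncoder or a neural network node) or consumed on the host. Continuing the example above, a minimal host‑side sketch, assuming the v3‑style createOutputQueue() / start() / isRunning() APIs:
Python
# Host-side consumption sketch; createOutputQueue()/start()/isRunning() assumed as in other v3 examples
hd_queue = hd_out.createOutputQueue()

pipeline.start()
while pipeline.isRunning():
    frame = hd_queue.get()  # dai.ImgFrame
    print(frame.getWidth(), frame.getHeight(), frame.getSequenceNum())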
Platform‑Specific Limits
RVC2
The ISP sustains roughly 600 MP/s (≈ 4K @ 30 fps), so budget accordingly when running heavy NNs and the video encoder. 3A runs on the embedded micro‑DSP and tops out at ~250 fps aggregated across all camera streams.
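As a rough sanity check against that 600 MP/s budget, you can sum sensor width × height × FPS over the sensors you plan to run. The sketch below uses illustrative sensor configurations, not figures from this page.
Python
# Back-of-the-envelope ISP load estimate for RVC2 (illustrative sensor configs)
sensors = [
    (1920, 1080, 30),  # e.g. CAM_A color sensor at 1080p
    (1280, 800, 30),   # e.g. CAM_B mono sensor
]
total_mp_per_s = sum(w * h * fps for w, h, fps in sensors) / 1e6
print(f"Estimated ISP load: {total_mp_per_s:.1f} MP/s (RVC2 budget ~600 MP/s)")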
Further Examples
API Reference
class
dai::node::Camera
variable
CameraControl initialControl
Initial control options to apply to sensor
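A small sketch of how initialControl might be used to apply manual 3A settings before the pipeline starts; the CameraControl setters shown are standard DepthAI calls and are assumed to apply to this node as well.
Python
import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera)
cam.build(dai.CameraBoardSocket.CAM_A)

# Apply manual exposure and focus before the pipeline starts (illustrative values)
cam.initialControl.setManualExposure(10000, 400)  # exposure time [us], ISO
cam.initialControl.setManualFocus(130)            # lens position, 0..255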
variable
Input inputControl
variable
Input mockIsp
Input for mocking 'isp' functionality on RVC2. Default queue is blocking with size 8
variable
Output raw
Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data. Captured directly from the camera sensor, and the source for the 'isp' output.
variable
OutputMap dynamicOutputs
function
Node::Output * requestOutput(std::pair< uint32_t, uint32_t > size, std::optional< ImgFrame::Type > type, ImgResizeMode resizeMode, std::optional< float > fps, std::optional< bool > enableUndistortion)
Get video output with specified size.
function
Node::Output * requestOutput(const Capability & capability, bool onHost)
Request output with advanced controls. Mainly to be used by custom node writers.
function
Node::Output * requestFullResolutionOutput(std::optional< ImgFrame::Type > type, std::optional< float > fps, bool useHighestResolution)
Get a high resolution output with full FOV on the sensor. By default the function will not use resolutions higher than 5000x4000, as those often require a lot of resources, making them hard to use in combination with other nodes.
Parameters
- type: Type of the output (NV12, BGR, ...) - by default it's auto-selected for best performance
- fps: FPS of the output - by default it's auto-selected to the highest FPS the sensor configuration supports, or 30, whichever is lower
- useHighestResolution: If true, the function will use the highest resolution available on the sensor, even if it's higher than 5000x4000
function
std::shared_ptr< Camera > build(dai::CameraBoardSocket boardSocket, std::optional< std::pair< uint32_t, uint32_t >> sensorResolution, std::optional< float > sensorFps)
Build with a specific board socket
Parameters
- boardSocket: Board socket to use
- sensorResolution: Sensor resolution to use - by default it's auto-detected from the requested outputs
- sensorFps: Sensor FPS to use - by default it's auto-detected from the requested outputs (maximum is used)
function
std::shared_ptr< Camera > build(dai::CameraBoardSocket boardSocket, ReplayVideo & replay)
Build with a specific board socket and mock input
function
std::shared_ptr< Camera > build(ReplayVideo & replay)
Build with mock input
function
uint32_t getMaxWidth()
Get max width of the camera (can only be called after build)
function
uint32_t getMaxHeight()
Get max height of the camera (can only be called after build)
function
CameraBoardSocket getBoardSocket()
Retrieves which board socket to use
Returns
Board socket to use
function
Camera & setMockIsp(ReplayVideo & replay)
Set mock ISP for Camera node. Automatically sets mockIsp size.
Parameters
- replay: ReplayVideo node to use as mock ISP
function
Camera()
explicit function
Camera(std::shared_ptr< Device > & defaultDevice)
explicit function
Camera(std::unique_ptr< Properties > props)
function
void buildStage1()
function
float getMaxRequestedFps()
function
uint32_t getMaxRequestedWidth()
function
uint32_t getMaxRequestedHeight()
inline function
DeviceNodeCRTP()
inline function
DeviceNodeCRTP(const std::shared_ptr< Device > & device)
inline function
DeviceNodeCRTP(std::unique_ptr< Properties > props)
inline function
DeviceNodeCRTP(std::unique_ptr< Properties > props, bool confMode)
inline function
DeviceNodeCRTP(const std::shared_ptr< Device > & device, std::unique_ptr< Properties > props, bool confMode)