MonoCamera

MonoCamera node is a source of image frames. You can control it at runtime with the inputControl input. Some DepthAI modules don’t have mono camera(s). Two mono cameras are used to calculate stereo depth (with the StereoDepth node).

How to place it

# Python
import depthai as dai
pipeline = dai.Pipeline()
mono = pipeline.create(dai.node.MonoCamera)

// C++
#include "depthai/depthai.hpp"
dai::Pipeline pipeline;
auto mono = pipeline.create<dai::node::MonoCamera>();

Inputs and Outputs

               ┌─────────────────┐
               │                 │         out
inputControl   │                 ├───────────►
──────────────►│    MonoCamera   │         raw
               │                 ├───────────►
               │                 │
               └─────────────────┘

Message types

  • inputControl - CameraControl
  • out - ImgFrame
  • raw - ImgFrame

Usage

# Python
import depthai as dai
pipeline = dai.Pipeline()
mono = pipeline.create(dai.node.MonoCamera)
mono.setCamera("right")
mono.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)

// C++
#include "depthai/depthai.hpp"
dai::Pipeline pipeline;
auto mono = pipeline.create<dai::node::MonoCamera>();
mono->setCamera("right");
mono->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);
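
To actually view the grayscale stream on the host, the out output is typically linked to an XLinkOut node and read from an output queue. Below is a minimal Python sketch, assuming an arbitrary stream name of "mono" and OpenCV for display:

import cv2
import depthai as dai

pipeline = dai.Pipeline()
mono = pipeline.create(dai.node.MonoCamera)
mono.setCamera("right")
mono.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)

# Send the grayscale frames from the device to the host
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("mono")  # stream name chosen for this sketch
mono.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="mono", maxSize=4, blocking=False)
    while True:
        frame = q.get()  # blocking call, returns an ImgFrame message
        cv2.imshow("mono", frame.getCvFrame())
        if cv2.waitKey(1) == ord("q"):
            break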

Reference

class depthai.node.MonoCamera

MonoCamera node. For use with grayscale sensors.

class Connection

Connection between an Input and Output

class Id

Node identifier. Unique for every node on a single Pipeline

Properties

alias of depthai.MonoCameraProperties

property frameEvent

Outputs metadata-only ImgFrame message as an early indicator of an incoming frame.

It's sent on the MIPI SoF (start-of-frame) event, just after the exposure of the current frame has finished and before the exposure for the next frame starts. It can be used to synchronize various processes with camera capture. Fields populated: camera id, sequence number, timestamp.
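
As an illustration (a sketch, not part of the reference), the frameEvent output could be linked to an XLinkOut node so the host can log the SoF metadata; the stream name "events" below is an arbitrary choice:

import depthai as dai

pipeline = dai.Pipeline()
mono = pipeline.create(dai.node.MonoCamera)
mono.setCamera("right")

# Forward the metadata-only ImgFrame messages to the host
xoutEvent = pipeline.create(dai.node.XLinkOut)
xoutEvent.setStreamName("events")  # arbitrary stream name for this sketch
mono.frameEvent.link(xoutEvent.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("events", maxSize=8, blocking=False)
    while True:
        evt = q.get()  # metadata only: no pixel data
        print(evt.getInstanceNum(), evt.getSequenceNum(), evt.getTimestamp())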

getAssetManager(*args, **kwargs)

Overloaded function.

  1. getAssetManager(self: depthai.Node) -> depthai.AssetManager

Get node AssetManager as a const reference

  2. getAssetManager(self: depthai.Node) -> depthai.AssetManager

Get node AssetManager as a const reference

getBoardSocket(self: depthai.node.MonoCamera) → depthai.CameraBoardSocket

Retrieves which board socket to use

Returns

Board socket to use

getCamId(self: depthai.node.MonoCamera) → int
getCamera(self: depthai.node.MonoCamera) → str

Retrieves which camera to use by name

Returns

Name of the camera to use

getFps(self: depthai.node.MonoCamera) → float

Get rate at which camera should produce frames

Returns

Rate in frames per second

getFrameEventFilter(self: depthai.node.MonoCamera) → List[depthai.FrameEvent]
getImageOrientation(self: depthai.node.MonoCamera) → depthai.CameraImageOrientation

Get camera image orientation

getInputRefs(*args, **kwargs)

Overloaded function.

  1. getInputRefs(self: depthai.Node) -> List[depthai.Node.Input]

Retrieves reference to node inputs

  2. getInputRefs(self: depthai.Node) -> List[depthai.Node.Input]

Retrieves reference to node inputs

getInputs(self: depthai.Node) → List[depthai.Node.Input]

Retrieves all node inputs

getName(self: depthai.Node) → str

Retrieves the node's name

getNumFramesPool(self: depthai.node.MonoCamera) → int

Get number of frames in main (ISP output) pool

getOutputRefs(*args, **kwargs)

Overloaded function.

  1. getOutputRefs(self: depthai.Node) -> List[depthai.Node.Output]

Retrieves reference to node outputs

  2. getOutputRefs(self: depthai.Node) -> List[depthai.Node.Output]

Retrieves reference to node outputs

getOutputs(self: depthai.Node) → List[depthai.Node.Output]

Retrieves all node outputs

getParentPipeline(*args, **kwargs)

Overloaded function.

  1. getParentPipeline(self: depthai.Node) -> depthai.Pipeline

  2. getParentPipeline(self: depthai.Node) -> depthai.Pipeline

getRawNumFramesPool(self: depthai.node.MonoCamera) → int

Get number of frames in raw pool

getResolution(self: depthai.node.MonoCamera) → depthai.MonoCameraProperties.SensorResolution

Get sensor resolution

getResolutionHeight(self: depthai.node.MonoCamera) → int

Get sensor resolution height

getResolutionSize(self: depthai.node.MonoCamera) → Tuple[int, int]

Get sensor resolution as size

getResolutionWidth(self: depthai.node.MonoCamera) → int

Get sensor resolution width

property id

Id of node

property initialControl

Initial control options to apply to sensor

property inputControl

Input for CameraControl message, which can modify camera parameters at runtime. Default queue is blocking with size 8.
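
For illustration (a sketch, not part of the reference), a CameraControl message can reach inputControl through an XLinkIn node; the stream name "control" and the exposure values below are arbitrary:

import depthai as dai

pipeline = dai.Pipeline()
mono = pipeline.create(dai.node.MonoCamera)
mono.setCamera("right")

# XLinkIn carries CameraControl messages from the host into inputControl
xin = pipeline.create(dai.node.XLinkIn)
xin.setStreamName("control")  # arbitrary stream name for this sketch
xin.out.link(mono.inputControl)

with dai.Device(pipeline) as device:
    ctrlQueue = device.getInputQueue("control")
    ctrl = dai.CameraControl()
    ctrl.setManualExposure(20000, 400)  # exposure time in microseconds, ISO sensitivity
    ctrlQueue.send(ctrl)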

property out

Outputs ImgFrame message that carries RAW8 encoded (grayscale) frame data.

Suitable for use with the StereoDepth node. Processed by ISP
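
As a sketch of the stereo use case mentioned above (assuming a device with "left" and "right" mono cameras and default StereoDepth settings), the two out streams are linked into a StereoDepth node:

import depthai as dai

pipeline = dai.Pipeline()

monoLeft = pipeline.create(dai.node.MonoCamera)
monoLeft.setCamera("left")
monoRight = pipeline.create(dai.node.MonoCamera)
monoRight.setCamera("right")

# StereoDepth consumes the two grayscale streams and produces depth
stereo = pipeline.create(dai.node.StereoDepth)
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("depth")  # arbitrary stream name for this sketch
stereo.depth.link(xout.input)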

property raw

Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.

Captured directly from the camera sensor

setBoardSocket(self: depthai.node.MonoCamera, boardSocket: depthai.CameraBoardSocket) → None

Specify which board socket to use

Parameter boardSocket:

Board socket to use

setCamId(self: depthai.node.MonoCamera, arg0: int) → None
setCamera(self: depthai.node.MonoCamera, name: str) → None

Specify which camera to use by name

Parameter name:

Name of the camera to use

setFps(self: depthai.node.MonoCamera, fps: float) → None

Set rate at which camera should produce frames

Parameter fps:

Rate in frames per second

setFrameEventFilter(self: depthai.node.MonoCamera, events: List[depthai.FrameEvent]) → None
setImageOrientation(self: depthai.node.MonoCamera, imageOrientation: depthai.CameraImageOrientation) → None

Set camera image orientation

setIsp3aFps(self: depthai.node.MonoCamera, arg0: int) → None

Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A is running on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also increases the convergence time of 3A. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps and a camera control is sent on every frame, but 3A fps is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.
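
For example (values here are illustrative only), running 3A at half the rate of a 30 fps stream:

mono.setFps(30)
mono.setIsp3aFps(15)  # auto exposure / white balance updated on every other frame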

setNumFramesPool(self: depthai.node.MonoCamera, arg0: int) → None

Set number of frames in main (ISP output) pool

setRawNumFramesPool(self: depthai.node.MonoCamera, arg0: int) → None

Set number of frames in raw pool

setRawOutputPacked(self: depthai.node.MonoCamera, packed: bool) → None

Configures whether the camera raw frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (false), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000'00pp'pppp'pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.
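
To illustrate the packed layout described above (a sketch, assuming the standard MIPI CSI-2 RAW10 grouping of four MSB bytes followed by one byte holding the four 2-bit LSBs, and ignoring any line stride padding), a packed raw buffer could be unpacked on the host with numpy:

import numpy as np

def unpack_raw10(data: bytes, width: int, height: int) -> np.ndarray:
    # Every 5 bytes hold 4 pixels: bytes 0..3 carry the 8 MSBs of pixels 0..3,
    # byte 4 packs their 2 LSBs (pixel 0 in bits 1:0, pixel 1 in bits 3:2, ...).
    packed = np.frombuffer(data, dtype=np.uint8).reshape(-1, 5)
    msb = packed[:, :4].astype(np.uint16)
    lsb = packed[:, 4].astype(np.uint16)
    pixels = np.empty_like(msb)
    for i in range(4):
        pixels[:, i] = (msb[:, i] << 2) | ((lsb >> (2 * i)) & 0x3)
    return pixels.reshape(height, width)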

setResolution(self: depthai.node.MonoCamera, resolution: depthai.MonoCameraProperties.SensorResolution) → None

Set sensor resolution

class dai::node::MonoCamera : public dai::NodeCRTP<Node, MonoCamera, MonoCameraProperties>

MonoCamera node. For use with grayscale sensors.

Public Functions

MonoCamera(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId)
MonoCamera(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId, std::unique_ptr<Properties> props)
void setBoardSocket(CameraBoardSocket boardSocket)

Specify which board socket to use

Parameters
  • boardSocket: Board socket to use

CameraBoardSocket getBoardSocket() const

Retrieves which board socket to use

Return

Board socket to use

void setCamera(std::string name)

Specify which camera to use by name

Parameters
  • name: Name of the camera to use

std::string getCamera() const

Retrieves which camera to use by name

Return

Name of the camera to use

void setCamId(int64_t id)
int64_t getCamId() const
void setImageOrientation(CameraImageOrientation imageOrientation)

Set camera image orientation.

CameraImageOrientation getImageOrientation() const

Get camera image orientation.

void setResolution(Properties::SensorResolution resolution)

Set sensor resolution.

Properties::SensorResolution getResolution() const

Get sensor resolution.

void setFrameEventFilter(const std::vector<dai::FrameEvent> &events)
std::vector<dai::FrameEvent> getFrameEventFilter() const
void setFps(float fps)

Set rate at which camera should produce frames

Parameters
  • fps: Rate in frames per second

void setIsp3aFps(int isp3aFps)

Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A is running on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also increases the convergence time of 3A. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps and a camera control is sent on every frame, but 3A fps is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.

float getFps() const

Get rate at which camera should produce frames

Return

Rate in frames per second

std::tuple<int, int> getResolutionSize() const

Get sensor resolution as size.

int getResolutionWidth() const

Get sensor resolution width.

int getResolutionHeight() const

Get sensor resolution height.

void setNumFramesPool(int num)

Set number of frames in main (ISP output) pool.

void setRawNumFramesPool(int num)

Set number of frames in raw pool.

int getNumFramesPool() const

Get number of frames in main (ISP output) pool.

int getRawNumFramesPool() const

Get number of frames in raw pool.

void setRawOutputPacked(bool packed)

Configures whether the camera raw frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (false), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000'00pp'pppp'pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.

Public Members

CameraControl initialControl

Initial control options to apply to sensor

Input inputControl = {*this, "inputControl", Input::Type::SReceiver, true, 8, {{DatatypeEnum::CameraControl, false}}}

Input for CameraControl message, which can modify camera parameters at runtime. Default queue is blocking with size 8.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries RAW8 encoded (grayscale) frame data.

Suitable for use with the StereoDepth node. Processed by ISP

Output raw = {*this, "raw", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.

Captured directly from the camera sensor

Output frameEvent = {*this, "frameEvent", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs metadata-only ImgFrame message as an early indicator of an incoming frame.

It's sent on the MIPI SoF (start-of-frame) event, just after the exposure of the current frame has finished and before the exposure for the next frame starts. It can be used to synchronize various processes with camera capture. Fields populated: camera id, sequence number, timestamp.

Public Static Attributes

static constexpr const char *NAME = "MonoCamera"

Private Members

std::shared_ptr<RawCameraControl> rawControl
