# ColorCamera

The ColorCamera node is a source of [ImgFrame](https://docs.luxonis.com/software/depthai-components/messages/img_frame.md). You can
control it at runtime with the `inputControl` and `inputConfig` inputs.

## How to place it

#### Python

```python
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
```

#### C++

```cpp
dai::Pipeline pipeline;
auto cam = pipeline.create<dai::node::ColorCamera>();
```

## Inputs and Outputs

Message types

 * inputConfig - [ImageManipConfig](https://docs.luxonis.com/software/depthai-components/messages/image_manip_config.md)
 * inputControl - [CameraControl](https://docs.luxonis.com/software/depthai-components/messages/camera_control.md)
 * raw - [ImgFrame](https://docs.luxonis.com/software/depthai-components/messages/img_frame.md) - RAW10 bayer data. Demo code for
   unpacking [here](https://github.com/luxonis/oak-examples/blob/3f1b2b2/gen2-color-isp-raw/main.py#L13-L32)
 * isp - [ImgFrame](https://docs.luxonis.com/software/depthai-components/messages/img_frame.md) - YUV420 planar (same as
   YU12/IYUV/I420)
 * still - [ImgFrame](https://docs.luxonis.com/software/depthai-components/messages/img_frame.md) - NV12, suitable for bigger size
   frames. The image gets created when a capture event is sent to the ColorCamera, so it's like taking a photo
 * preview - [ImgFrame](https://docs.luxonis.com/software/depthai-components/messages/img_frame.md) - RGB (or BGR
   planar/interleaved if configured), mostly suited for small size previews and to feed the image into
   [NeuralNetwork](https://docs.luxonis.com/software/depthai-components/nodes/neural_network.md)
 * video - [ImgFrame](https://docs.luxonis.com/software/depthai-components/messages/img_frame.md) - NV12, suitable for bigger size
   frames
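
The `raw` output above is MIPI-packed RAW10: every 5 bytes hold 4 pixels (four bytes with the 8 MSBs of each pixel, then one byte
carrying the 2 LSBs of all four). As a stdlib-only illustration of the unpacking (the linked demo uses NumPy; this sketch just
shows the bit layout):

```python
def unpack_raw10(packed: bytes) -> list:
    """Unpack MIPI-packed RAW10: 5 bytes -> 4 ten-bit pixel values.

    Bytes 0..3 carry the 8 most significant bits of pixels 0..3;
    byte 4 carries the 2 least significant bits of each pixel.
    """
    if len(packed) % 5:
        raise ValueError("packed RAW10 length must be a multiple of 5")
    pixels = []
    for i in range(0, len(packed), 5):
        group = packed[i:i + 5]
        lsbs = group[4]
        for j in range(4):
            pixels.append((group[j] << 2) | ((lsbs >> (2 * j)) & 0b11))
    return pixels

# Four 10-bit pixels in five bytes; 0x3FF is full-scale white
packed = bytes([0xFF, 0x00, 0x80, 0x01, 0b01_10_00_11])
print(unpack_raw10(packed))  # [1023, 0, 514, 5]
```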

The ISP (image signal processor) is used for bayer transformation, demosaicing, noise reduction, and other image enhancements. It
interacts with the 3A algorithms: auto-focus, auto-exposure, and auto-white-balance, which handle image sensor adjustments such as
exposure time, sensitivity (ISO), and lens position (if the camera module has a motorized lens) at runtime. Click
[here](https://en.wikipedia.org/wiki/Image_processor) for more information.

Image Post-Processing converts YUV420 planar frames from the ISP into video/preview/still frames.
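
Both formats are YUV420, i.e. 1.5 bytes per pixel: a full-resolution luma plane plus 2x2-subsampled chroma (separate U and V
planes for I420/`isp`, one interleaved UV plane for NV12/`video`/`still`). A quick sanity check of the buffer sizes (illustrative
only, not DepthAI API):

```python
def yuv420_frame_size(width: int, height: int) -> int:
    """Bytes per YUV420 frame (I420 and NV12 alike):
    one W*H luma plane + two (W/2)*(H/2) chroma planes."""
    if width % 2 or height % 2:
        raise ValueError("YUV420 requires even dimensions")
    luma = width * height
    chroma = 2 * (width // 2) * (height // 2)
    return luma + chroma

print(yuv420_frame_size(3840, 2160))  # 12441600 - a 4K NV12 'video' frame
print(yuv420_frame_size(1920, 1080))  # 3110400
```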

`still` (when a capture is triggered) and `isp` work at the full camera resolution, while `video` and `preview` are limited to a
maximum of 4K (3840x2160) and are cropped from `isp`. For IMX378 (12MP, 4056x3040), the post-processing works like this: if you
aren't downscaling the ISP output, the `video` output is center-cropped to 4K (max 3840x2160, a limitation of the `video` output).
The `preview` output is in turn derived from `video`, so when the preview size is set to a 1:1 aspect ratio (e.g. a 300x300
preview for the MobileNet-SSD NN model), a square region is cropped from the `video` frame.
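
This crop chain can be sketched numerically with a hypothetical `center_crop` helper (not DepthAI API):

```python
def center_crop(src_w, src_h, dst_w, dst_h):
    """Top-left corner and size of a centered dst_w x dst_h crop in a src frame."""
    x = (src_w - dst_w) // 2
    y = (src_h - dst_h) // 2
    return x, y, dst_w, dst_h

# IMX378 12MP isp output -> 4K video crop
print(center_crop(4056, 3040, 3840, 2160))  # (108, 440, 3840, 2160)

# 4K video -> 1:1 square region that a 300x300 preview is scaled from
# (the square side equals the video height)
print(center_crop(3840, 2160, 2160, 2160))  # (840, 0, 2160, 2160)
```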

## Usage

#### Python

```python
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
cam.setInterleaved(False)
cam.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
```

#### C++

```cpp
dai::Pipeline pipeline;
auto cam = pipeline.create<dai::node::ColorCamera>();
cam->setPreviewSize(300, 300);
cam->setBoardSocket(dai::CameraBoardSocket::CAM_A);
cam->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
cam->setInterleaved(false);
cam->setColorOrder(dai::ColorCameraProperties::ColorOrder::RGB);
```

## 3A Algorithms

The 3A - Auto-Exposure (AE), Auto-White Balance (AWB), and Auto-Focus (AF) - algorithms are used to optimize image quality and run
directly on RVC. By default, these settings are in AUTO mode, with limits (e.g., min/max exposure) specific to each sensor (see
[supported sensors](https://docs.luxonis.com/hardware/platform/sensors/sensors.md) for details).

You can manually control these settings either by following the steps in [RGB camera control
example](https://docs.luxonis.com/software/depthai/examples/rgb_camera_control.md) or by using the [cam_test.py
script](https://github.com/luxonis/depthai-python/blob/main/utilities/cam_test.py).

 * Stereo cameras: sensors share the same I2C bus, so their 3A settings (AWB, AE) are synchronized automatically.
 * Independent sensors: on setups like OAK FFC or OAK-D-LR, where each sensor has its own I2C bus, the 3a-follow feature can be
   used to synchronize 3A settings from one sensor to the others.

Example Usage

```python
cam['cam_b'].initialControl.setMisc("3a-follow", dai.CameraBoardSocket.CAM_A)
cam['cam_c'].initialControl.setMisc("3a-follow", dai.CameraBoardSocket.CAM_A)
```

The 3a-follow feature copies the 3A settings (exposure, ISO, and white balance) from a primary camera (e.g., CAM_A) to other
cameras in the setup (e.g., CAM_B and CAM_C).

## Limitations

Here are known camera limitations for the [RVC2](https://docs.luxonis.com/hardware/platform/rvc/rvc2.md#rvc2):

 * ISP can process about 600 MP/s, and about 500 MP/s when the pipeline is also running NNs and video encoder in parallel
 * 3A algorithms can process about 200..250 FPS overall (for all camera streams). This is a current limitation of our
   implementation, and we plan a workaround to run the 3A algorithms on every Xth frame; no ETA yet
 * ISP scaling numerator value can be 1..16 and denominator value 1..32, for both vertical and horizontal scaling. So you can
   downscale e.g. 12MP (4056x3040) only to the resolutions [calculated
   here](https://docs.google.com/spreadsheets/d/153yTstShkJqsPbkPOQjsVRmM8ZO3A6sCqm7uayGF-EE/edit#gid=0)
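
The valid downscales can also be enumerated directly. This sketch lists, for the 12MP IMX378, the scaling fractions that yield
whole, even output dimensions (an assumption mirroring the spreadsheet's constraints; the actual ISP may impose further alignment
requirements):

```python
from fractions import Fraction

def valid_isp_scales(src_w=4056, src_h=3040):
    """Yield (num, denom, out_w, out_h) for ISP scaling factors num/denom
    (num 1..16, denom 1..32) that give whole, even output dimensions."""
    seen = set()
    for denom in range(1, 33):
        for num in range(1, 17):
            f = Fraction(num, denom)
            if f >= 1 or f in seen:
                continue  # only true downscales, skip duplicate ratios
            w, h = src_w * num, src_h * num
            if w % denom == 0 and h % denom == 0:
                out_w, out_h = w // denom, h // denom
                if out_w % 2 == 0 and out_h % 2 == 0:
                    seen.add(f)
                    yield num, denom, out_w, out_h

scales = sorted(valid_isp_scales(), key=lambda s: -s[2])
print(scales[0])  # (3, 4, 3042, 2280) - the largest such downscale
```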

## Examples of functionality

 * [RGB Preview](https://docs.luxonis.com/software/depthai/examples/rgb_preview.md)
 * [RGB Camera Control](https://docs.luxonis.com/software/depthai/examples/rgb_camera_control.md)
 * [RGB video](https://docs.luxonis.com/software/depthai/examples/rgb_video.md)

## Reference

### depthai.node.ColorCamera(depthai.Node)

Kind: Class

ColorCamera node. For use with color sensors.

#### getBoardSocket(self) -> depthai.CameraBoardSocket

Kind: Method

Retrieves which board socket to use

Returns:
Board socket to use

#### getCamId(self) -> int

Kind: Method

#### getCamera(self) -> str

Kind: Method

Retrieves which camera to use by name

Returns:
Name of the camera to use

#### getColorOrder(self) -> depthai.ColorCameraProperties.ColorOrder

Kind: Method

Get color order of preview output frames. RGB or BGR

#### getFp16(self) -> bool

Kind: Method

Get fp16 (0..255) data of preview output frames

#### getFps(self) -> float

Kind: Method

Get rate at which camera should produce frames

Returns:
Rate in frames per second

#### getFrameEventFilter(self) -> list[depthai.FrameEvent]

Kind: Method

#### getImageOrientation(self) -> depthai.CameraImageOrientation

Kind: Method

Get camera image orientation

#### getInterleaved(self) -> bool

Kind: Method

Get planar or interleaved data of preview output frames

#### getIspHeight(self) -> int

Kind: Method

Get 'isp' output height

#### getIspNumFramesPool(self) -> int

Kind: Method

Get number of frames in isp pool

#### getIspSize(self) -> tuple[int, int]

Kind: Method

Get 'isp' output resolution as size, after scaling

#### getIspWidth(self) -> int

Kind: Method

Get 'isp' output width

#### getPreviewHeight(self) -> int

Kind: Method

Get preview height

#### getPreviewKeepAspectRatio(self) -> bool

Kind: Method

See also:
setPreviewKeepAspectRatio

Returns:
Preview keep aspect ratio option

#### getPreviewNumFramesPool(self) -> int

Kind: Method

Get number of frames in preview pool

#### getPreviewSize(self) -> tuple[int, int]

Kind: Method

Get preview size as tuple

#### getPreviewWidth(self) -> int

Kind: Method

Get preview width

#### getRawNumFramesPool(self) -> int

Kind: Method

Get number of frames in raw pool

#### getResolution(self) -> depthai.ColorCameraProperties.SensorResolution

Kind: Method

Get sensor resolution

#### getResolutionHeight(self) -> int

Kind: Method

Get sensor resolution height

#### getResolutionSize(self) -> tuple[int, int]

Kind: Method

Get sensor resolution as size

#### getResolutionWidth(self) -> int

Kind: Method

Get sensor resolution width

#### getSensorCrop(self) -> tuple[float, float]

Kind: Method

Returns:
Sensor top left crop coordinates

#### getSensorCropX(self) -> float

Kind: Method

Get sensor top left x crop coordinate

#### getSensorCropY(self) -> float

Kind: Method

Get sensor top left y crop coordinate

#### getStillHeight(self) -> int

Kind: Method

Get still height

#### getStillNumFramesPool(self) -> int

Kind: Method

Get number of frames in still pool

#### getStillSize(self) -> tuple[int, int]

Kind: Method

Get still size as tuple

#### getStillWidth(self) -> int

Kind: Method

Get still width

#### getVideoHeight(self) -> int

Kind: Method

Get video height

#### getVideoNumFramesPool(self) -> int

Kind: Method

Get number of frames in video pool

#### getVideoSize(self) -> tuple[int, int]

Kind: Method

Get video size as tuple

#### getVideoWidth(self) -> int

Kind: Method

Get video width

#### getWaitForConfigInput(self) -> bool

Kind: Method

See also:
setWaitForConfigInput

Returns:
True if wait for inputConfig message, false otherwise

#### sensorCenterCrop(self)

Kind: Method

Specify sensor center crop, i.e. the ratio between resolution size and video size

#### setBoardSocket(self, boardSocket: depthai.CameraBoardSocket)

Kind: Method

Specify which board socket to use

Parameter ``boardSocket``:
Board socket to use

#### setCamId(self, arg0: typing.SupportsInt)

Kind: Method

#### setCamera(self, name: str)

Kind: Method

Specify which camera to use by name

Parameter ``name``:
Name of the camera to use

#### setColorOrder(self, colorOrder: depthai.ColorCameraProperties.ColorOrder)

Kind: Method

Set color order of preview output images. RGB or BGR

#### setFp16(self, fp16: bool)

Kind: Method

Set fp16 (0..255) data type of preview output frames

#### setFps(self, fps: typing.SupportsFloat)

Kind: Method

Set rate at which camera should produce frames

Parameter ``fps``:
Rate in frames per second

#### setFrameEventFilter(self, events: collections.abc.Sequence[depthai.FrameEvent])

Kind: Method

#### setImageOrientation(self, imageOrientation: depthai.CameraImageOrientation)

Kind: Method

Set camera image orientation

#### setInterleaved(self, interleaved: bool)

Kind: Method

Set planar or interleaved data of preview output frames

#### setIsp3aFps(self, isp3aFps: typing.SupportsInt)

Kind: Method

Set the ISP 3A rate (auto-focus, auto-exposure, auto-white-balance, camera
controls, etc.). The default (0) matches the camera FPS, meaning that 3A runs on
every frame. Reducing the 3A rate lowers the CPU usage on the CSS, but also
increases the convergence time of 3A. Note that camera controls will be
processed at this rate. E.g. if the camera is running at 30 fps, a camera
control is sent every frame, but 3A fps is set to 15, the camera control
messages will be processed at a 15 fps rate, which will lead to queueing.
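
To make the queueing effect concrete, the backlog grows linearly with the rate mismatch (illustrative arithmetic, not DepthAI
API):

```python
def control_backlog(camera_fps: float, isp3a_fps: float, seconds: float) -> float:
    """Number of unprocessed CameraControl messages after `seconds`,
    assuming one control per frame and processing at the 3A rate."""
    return max(0.0, (camera_fps - isp3a_fps) * seconds)

# 30 fps camera, 3A limited to 15 fps, controls sent every frame:
print(control_backlog(30, 15, 2.0))  # 30.0 messages queued after 2 s
```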

#### setIspNumFramesPool(self, arg0: typing.SupportsInt)

Kind: Method

Set number of frames in isp pool

#### setIspScale()

Kind: Method

Set 'isp' output scaling by a numerator/denominator fraction, allowing
downscaling from the full sensor resolution (see the Limitations section above
for the valid ranges)

#### setNumFramesPool(self, raw: typing.SupportsInt, isp: typing.SupportsInt, preview: typing.SupportsInt, video:
typing.SupportsInt, still: typing.SupportsInt)

Kind: Method

Set number of frames in all pools

#### setPreviewKeepAspectRatio(self, keep: bool)

Kind: Method

Specifies whether the preview output should preserve the aspect ratio when
downscaling from the video size.

Parameter ``keep``:
If true, a larger crop region is considered so the final image can be
produced at the specified aspect ratio. Otherwise, the video frame is
resized (squeezed) to fit the preview size
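
A sketch of the two behaviors for a 1920x1080 `video` output feeding a 300x300 preview (hypothetical helper, not DepthAI API):

```python
def preview_source_region(video_w, video_h, prev_w, prev_h, keep_aspect: bool):
    """Region of the video frame that gets scaled to the preview size.

    keep_aspect=True: center-crop the largest region matching the preview
    aspect ratio, so the image is not distorted. keep_aspect=False: the
    whole video frame is resized (squeezed) to fit the preview."""
    if not keep_aspect:
        return 0, 0, video_w, video_h
    # Largest crop with the preview's aspect ratio that fits in the video
    scale = min(video_w / prev_w, video_h / prev_h)
    crop_w, crop_h = round(prev_w * scale), round(prev_h * scale)
    return (video_w - crop_w) // 2, (video_h - crop_h) // 2, crop_w, crop_h

print(preview_source_region(1920, 1080, 300, 300, keep_aspect=True))
# (420, 0, 1080, 1080) -> square center crop, then scaled to 300x300
print(preview_source_region(1920, 1080, 300, 300, keep_aspect=False))
# (0, 0, 1920, 1080) -> whole frame squeezed to 300x300
```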

#### setPreviewNumFramesPool(self, arg0: typing.SupportsInt)

Kind: Method

Set number of frames in preview pool

#### setPreviewSize()

Kind: Method

Set preview output size

#### setRawNumFramesPool(self, arg0: typing.SupportsInt)

Kind: Method

Set number of frames in raw pool

#### setRawOutputPacked(self, packed: bool)

Kind: Method

Configures whether the camera `raw` frames are saved MIPI-packed to memory. The
packed format is more efficient, consuming less memory on device and sending
less data to the host: RAW10 packs 4 pixels into 5 bytes, RAW12 packs 2 pixels
into 3 bytes. When packing is disabled (`false`), data is saved LSB-aligned,
e.g. a RAW10 pixel is stored as a uint16, on bits 9..0: 0b0000'00pp'pppp'pppp.
The default is auto: enabled for standard color/monochrome cameras, where the
ISP can work with both packed and unpacked data, but disabled for other cameras
such as ToF.
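
The memory difference is easy to quantify (illustrative, stdlib-only):

```python
def raw_frame_bytes(width, height, bits_per_pixel=10, packed=True):
    """Bytes per raw frame: MIPI-packed stores pixels back to back
    (RAW10: 4 px in 5 B, RAW12: 2 px in 3 B); unpacked stores each
    pixel LSB-aligned in a uint16."""
    n = width * height
    if packed:
        return n * bits_per_pixel // 8
    return n * 2  # one uint16 per pixel

w, h = 4056, 3040  # IMX378 12MP
print(raw_frame_bytes(w, h, 10, packed=True))   # 15412800 bytes (~15.4 MB)
print(raw_frame_bytes(w, h, 10, packed=False))  # 24660480 bytes (~24.7 MB)
```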

#### setResolution(self, resolution: depthai.ColorCameraProperties.SensorResolution)

Kind: Method

Set sensor resolution

#### setSensorCrop(self, x: typing.SupportsFloat, y: typing.SupportsFloat)

Kind: Method

Specifies the cropping that happens when converting the ISP output to the video
output. By default, video is center-cropped from the ISP output. Note that this
doesn't perform on-sensor cropping (streaming only that region over MIPI);
instead, the cropping is done in post-processing on the ISP (on RVC).

Parameter ``x``:
Top left X coordinate

Parameter ``y``:
Top left Y coordinate

#### setStillNumFramesPool(self, arg0: typing.SupportsInt)

Kind: Method

Set number of frames in still pool

#### setStillSize()

Kind: Method

Set still output size

#### setVideoNumFramesPool(self, arg0: typing.SupportsInt)

Kind: Method

Set number of frames in video pool

#### setVideoSize()

Kind: Method

Set video output size

#### setWaitForConfigInput(self, wait: bool)

Kind: Method

Specify whether to wait until inputConfig receives a configuration message
before sending out a frame.

Parameter ``wait``:
True to wait for inputConfig message, false otherwise

#### frameEvent

Kind: Property

Outputs metadata-only ImgFrame message as an early indicator of an incoming
frame.

It's sent on the MIPI SoF (start-of-frame) event, just after the exposure of the
current frame has finished and before the exposure for the next frame starts. It
can be used to synchronize various processes with the camera capture. Fields
populated: camera id, sequence number, timestamp

#### initialControl

Kind: Property

Initial control options to apply to sensor

#### inputConfig

Kind: Property

Input for ImageManipConfig message, which can modify crop parameters at runtime

Default queue is non-blocking with size 8

#### inputControl

Kind: Property

Input for CameraControl message, which can modify camera parameters at runtime

Default queue is blocking with size 8

#### isp

Kind: Property

Outputs ImgFrame message that carries YUV420 planar (I420/IYUV) frame data.

Generated by the ISP engine, and the source for the 'video', 'preview' and
'still' outputs

#### preview

Kind: Property

Outputs ImgFrame message that carries BGR/RGB planar/interleaved encoded frame
data.

Suitable for use with NeuralNetwork node

#### raw

Kind: Property

Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame
data.

Captured directly from the camera sensor, and the source for the 'isp' output.

#### still

Kind: Property

Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane
interleaved) frame data.

The message is sent only when a CameraControl message with the captureStill
command set arrives at inputControl.

#### video

Kind: Property

Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane
interleaved) frame data.

Suitable for use with VideoEncoder node

### Need assistance?

Head over to [Discussion Forum](https://discuss.luxonis.com/) for technical support or any other questions you might have.
