PointCloud

Supported on: RVC2, RVC4
Computes a 3D point cloud from a depth map. Optionally takes an aligned color input to produce colorized point clouds.
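Under the hood this is a standard pinhole back-projection of each depth pixel. A minimal NumPy sketch of the math (the intrinsics fx, fy, cx, cy and the flat depth map are illustrative values, not read from a device):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) to an (H*W, 3) point array
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Illustrative intrinsics for a 640x400 depth map of a flat wall 2 m away
depth = np.full((400, 640), 2.0)
pts = depth_to_points(depth, fx=450.0, fy=450.0, cx=320.0, cy=200.0)
```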

How to place it

Python

import depthai as dai

with dai.Pipeline() as pipeline:
    pointCloud = pipeline.create(dai.node.PointCloud)

C++

dai::Pipeline pipeline;
auto pointCloud = pipeline.create<dai::node::PointCloud>();

Inputs and Outputs

passthroughDepth forwards the original depth frame used to compute a given PointCloudData output. Useful when input queues are non-blocking and you need to correlate depth frames with their corresponding point clouds.
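A host-side pairing sketch: the matching logic is plain Python, and the _Msg stand-in only mimics the getSequenceNum() accessor that DepthAI messages expose:

```python
def pair_by_sequence(depth_msgs, pcl_msgs):
    """Match depth frames to point clouds that share a sequence number.
    Returns a list of (depth, pointcloud) pairs."""
    depth_by_seq = {m.getSequenceNum(): m for m in depth_msgs}
    return [(depth_by_seq[p.getSequenceNum()], p)
            for p in pcl_msgs
            if p.getSequenceNum() in depth_by_seq]

# Stand-in message with a DepthAI-style getSequenceNum() accessor
class _Msg:
    def __init__(self, seq):
        self.seq = seq
    def getSequenceNum(self):
        return self.seq

# Only sequence number 2 exists in both streams
pairs = pair_by_sequence([_Msg(1), _Msg(2), _Msg(3)], [_Msg(2), _Msg(4)])
```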

Configuration

All settings can be set via initialConfig before the pipeline starts, or changed at runtime by sending a new PointCloudConfig to inputConfig. Sending a new config mid-stream is safe: the node reinitializes automatically whenever the configuration, calibration, or frame transformation changes. See PointCloudConfig for detailed descriptions and usage examples.

Python

pc = pipeline.create(dai.node.PointCloud)
pc.initialConfig.setOrganized(True)
pc.initialConfig.setLengthUnit(dai.LengthUnit.METER)
pc.initialConfig.setTargetCoordinateSystem(dai.CameraBoardSocket.CAM_A)
pc.setNumFramesPool(8)

C++

auto pc = pipeline.create<dai::node::PointCloud>();
pc->initialConfig->setOrganized(true);
pc->initialConfig->setLengthUnit(dai::LengthUnit::METER);
pc->initialConfig->setTargetCoordinateSystem(dai::CameraBoardSocket::CAM_A);
pc->setNumFramesPool(8);
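A runtime reconfiguration sketch in Python, assuming inputConfig is fed from a host-side input queue created before the pipeline starts (a pipeline configuration fragment, not a complete program):

```python
# Create an input queue for runtime configs before starting the pipeline
configQueue = pc.inputConfig.createInputQueue()

# ... pipeline.start() ...

# Later, switch to organized point clouds mid-stream; the node
# reinitializes automatically when the new config arrives
newConfig = dai.PointCloudConfig()
newConfig.setOrganized(True)
configQueue.send(newConfig)
```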

Coordinate system

By default the node reads T_frame_to_ref from the depth frame's ImgTransformation extrinsics and applies it, so the output point cloud is in the reference (origin) camera coordinate system. On top of that, you can apply one of three additional transformations:

1. Camera socket coordinate system

Transform the point cloud into the coordinate system of another camera on the device (e.g. CameraBoardSocket.CAM_A). The origin is at the camera sensor, rotation follows the camera's optical frame. The node uses the device calibration to compute the extrinsic transformation automatically.
Python
# Point cloud in the coordinate system of the color camera (CAM_A)
pc.initialConfig.setTargetCoordinateSystem(dai.CameraBoardSocket.CAM_A)

2. Housing coordinate system

Transform the point cloud into a housing coordinate system. This uses the device's housing coordinate definitions to express all points relative to a physical point on the device enclosure.
Python
# Point cloud in the VESA mount coordinate system
pc.initialConfig.setTargetCoordinateSystem(dai.HousingCoordinateSystem.VESA_A)
Available housing coordinate systems:
  • HousingCoordinateSystem.CAM_A–CAM_D — Origin at the same point as the corresponding CameraBoardSocket.CAM_A–CAM_D. The difference is rotation: the X-Y plane is parallel to the front glass of the device, whereas camera socket coordinate systems are rotated to the camera's optical frame.
  • HousingCoordinateSystem.FRONT_CAM_A–FRONT_CAM_D — Positioned like HousingCoordinateSystem.CAM_A–CAM_D, but shifted forward to the front side of the front glass.
  • HousingCoordinateSystem.VESA_A–VESA_J — Device mounting points.
  • HousingCoordinateSystem.IMU — IMU sensor origin.

3. Custom transformation matrix

Apply an arbitrary 4×4 transformation matrix representing T_ref_to_custom — the transform from the reference camera to your custom coordinate system. The node composes it with the frame extrinsics to get the final applied transform:
Python
# 90° rotation around Z axis
transform = [
    [0.0, -1.0, 0.0, 0.0],
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.0],
    [0.0,  0.0, 0.0, 1.0],
]
pc.initialConfig.setTransformationMatrix(transform)
If no coordinate system target is set and the transformation matrix is identity, only the frame extrinsics (T_frame_to_ref) are applied, bringing points into the reference camera frame.
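The composition can be sketched in NumPy; the matrix names follow the notation above, and the numeric values are illustrative:

```python
import numpy as np

# Frame extrinsics: depth frame -> reference camera (illustrative values)
T_frame_to_ref = np.eye(4)
T_frame_to_ref[:3, 3] = [0.075, 0.0, 0.0]  # e.g. a 7.5 cm baseline shift

# Custom transform: reference camera -> user coordinate system
T_ref_to_custom = np.array([
    [0.0, -1.0, 0.0, 0.0],  # 90 degree rotation around Z
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.0],
    [0.0,  0.0, 0.0, 1.0],
])

# Final transform: first frame extrinsics, then the custom transform
T_final = T_ref_to_custom @ T_frame_to_ref

# Apply to an (N, 3) point array via homogeneous coordinates
points = np.array([[1.0, 2.0, 3.0]])
homog = np.hstack([points, np.ones((len(points), 1))])
transformed = (T_final @ homog.T).T[:, :3]
```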

Interaction between options

The three modes are mutually exclusive. Setting a camera socket or housing coordinate system sets the coordinate system type to CAMERA_SOCKET or HOUSING respectively; in those modes the custom transformation matrix from setTransformationMatrix is ignored. The custom matrix is used only when the coordinate system type is DEFAULT (i.e. neither setTargetCoordinateSystem overload was called). All three methods can also be called on the node directly (forwarded to initialConfig):
Python
pc.setTargetCoordinateSystem(dai.CameraBoardSocket.CAM_A)
# or
pc.setTargetCoordinateSystem(dai.HousingCoordinateSystem.VESA_A)

Usage

Point cloud from stereo depth

Generate a basic point cloud from stereo depth.

Python

import depthai as dai

pipeline = dai.Pipeline()

left = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
right = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
stereo = pipeline.create(dai.node.StereoDepth)
left.requestOutput((640, 400)).link(stereo.left)
right.requestOutput((640, 400)).link(stereo.right)

pc = pipeline.create(dai.node.PointCloud)
pc.initialConfig.setLengthUnit(dai.LengthUnit.METER)
stereo.depth.link(pc.inputDepth)

q = pc.outputPointCloud.createOutputQueue(maxSize=4, blocking=False)

with pipeline:
    pipeline.start()
    while pipeline.isRunning():
        pclData = q.get()
        points = pclData.getPoints()  # np.ndarray of shape (N, 3), float32
        print(f"Points: {len(points)}, Z=[{pclData.getMinZ():.2f}, {pclData.getMaxZ():.2f}]")

C++

#include <iostream>
#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto left = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_B);
    auto right = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_C);
    auto stereo = pipeline.create<dai::node::StereoDepth>();
    left->requestOutput(std::make_pair(640, 400))->link(stereo->left);
    right->requestOutput(std::make_pair(640, 400))->link(stereo->right);

    auto pc = pipeline.create<dai::node::PointCloud>();
    pc->initialConfig->setLengthUnit(dai::LengthUnit::METER);
    stereo->depth.link(pc->inputDepth);

    auto q = pc->outputPointCloud.createOutputQueue(4, false);

    pipeline.start();
    while(pipeline.isRunning()) {
        auto pclData = q->get<dai::PointCloudData>();
        auto points = pclData->getPoints();
        std::cout << "Points: " << points.size()
                  << ", Z=[" << pclData->getMinZ() << ", " << pclData->getMaxZ() << "]" << std::endl;
    }
    pipeline.stop();
    return 0;
}
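The returned (N, 3) array is a regular NumPy array, so host-side post-processing such as cropping by depth range is straightforward. A small sketch with illustrative thresholds:

```python
import numpy as np

def crop_by_z(points, z_min, z_max):
    """Keep only points whose Z falls inside [z_min, z_max] (same unit
    as configured with setLengthUnit, here meters)."""
    mask = (points[:, 2] >= z_min) & (points[:, 2] <= z_max)
    return points[mask]

pts = np.array([[0.0, 0.0, 0.0],   # invalid point (zero depth)
                [0.1, 0.2, 1.5],   # within range
                [0.3, 0.1, 9.0]])  # too far
near = crop_by_z(pts, 0.2, 5.0)
```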

Colorized point cloud

Link depth and color images aligned to the same coordinate system. Typically the depth is aligned to the color camera.

Python

import depthai as dai

pipeline = dai.Pipeline()

left = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
right = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
color = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)

stereo = pipeline.create(dai.node.StereoDepth)
left.requestFullResolutionOutput().link(stereo.left)
right.requestFullResolutionOutput().link(stereo.right)

colorOut = color.requestOutput((640, 400), type=dai.ImgFrame.Type.RGB888i,
                               resizeMode=dai.ImgResizeMode.CROP, enableUndistortion=True)

pc = pipeline.create(dai.node.PointCloud)
pc.initialConfig.setLengthUnit(dai.LengthUnit.METER)

# Align depth to the color camera
platform = pipeline.getDefaultDevice().getPlatform()
if platform == dai.Platform.RVC4:
    imageAlign = pipeline.create(dai.node.ImageAlign)
    stereo.depth.link(imageAlign.input)
    colorOut.link(imageAlign.inputAlignTo)
    imageAlign.outputAligned.link(pc.inputDepth)
else:
    colorOut.link(stereo.inputAlignTo)
    stereo.depth.link(pc.inputDepth)

colorOut.link(pc.inputColor)

q = pc.outputPointCloud.createOutputQueue(maxSize=4, blocking=False)

with pipeline:
    pipeline.start()
    while pipeline.isRunning():
        pcd = q.get()
        if pcd.isColor():
            xyz, rgba = pcd.getPointsRGB()
            print(f"Points: {len(xyz)}, color=yes, Z=[{pcd.getMinZ():.2f}, {pcd.getMaxZ():.2f}]")

C++

#include <iostream>
#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto left = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_B);
    auto right = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_C);
    auto color = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_A);

    auto stereo = pipeline.create<dai::node::StereoDepth>();
    left->requestFullResolutionOutput()->link(stereo->left);
    right->requestFullResolutionOutput()->link(stereo->right);

    auto colorOut = color->requestOutput(std::make_pair(640, 400), dai::ImgFrame::Type::RGB888i,
                                         dai::ImgResizeMode::CROP, std::nullopt, true);

    auto pc = pipeline.create<dai::node::PointCloud>();
    pc->initialConfig->setLengthUnit(dai::LengthUnit::METER);

    // Align depth to the color camera
    auto platform = pipeline.getDefaultDevice()->getPlatform();
    if(platform == dai::Platform::RVC4) {
        auto imageAlign = pipeline.create<dai::node::ImageAlign>();
        stereo->depth.link(imageAlign->input);
        colorOut->link(imageAlign->inputAlignTo);
        imageAlign->outputAligned.link(pc->inputDepth);
    } else {
        colorOut->link(stereo->inputAlignTo);
        stereo->depth.link(pc->inputDepth);
    }

    colorOut->link(pc->getColorInput());

    auto q = pc->outputPointCloud.createOutputQueue(4, false);

    pipeline.start();
    while(pipeline.isRunning()) {
        auto pcd = q->get<dai::PointCloudData>();
        if(pcd->isColor()) {
            auto points = pcd->getPointsRGB();
            std::cout << "Points: " << points.size() << ", color=yes"
                      << ", Z=[" << pcd->getMinZ() << ", " << pcd->getMaxZ() << "]" << std::endl;
        }
    }
    pipeline.stop();
    return 0;
}
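Viewers such as Open3D typically expect float colors in [0, 1]. A NumPy sketch converting the rgba array returned by getPointsRGB() in the Python example above (assuming it arrives as an (N, 4) uint8 array):

```python
import numpy as np

def rgba_to_float_rgb(rgba):
    """Convert an (N, 4) uint8 RGBA array to (N, 3) float RGB in [0, 1],
    dropping the alpha channel."""
    return rgba[:, :3].astype(np.float32) / 255.0

# Two illustrative colors: orange and blue, both fully opaque
rgba = np.array([[255, 128, 0, 255],
                 [0, 0, 255, 255]], dtype=np.uint8)
rgb = rgba_to_float_rgb(rgba)
```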

Reference

class

dai::node::PointCloud

#include "PointCloud.hpp"
variable
std::shared_ptr< PointCloudConfig > initialConfig
Initial config to use when computing the point cloud.
variable
Input inputConfig
Input PointCloudConfig message with the ability to modify parameters at runtime. The default queue is non-blocking with size 4.
variable
Subnode< node::Sync > sync
Sync subnode for synchronized depth + color input. When only depth is connected, Sync passes through single-item MessageGroups. When both depth and color are connected, Sync pairs them by timestamp.
variable
InputMap & syncInputs
variable
Input & inputDepth
Input message with depth data used to create the point cloud. Routed through the internal Sync subnode.
variable
Output outputPointCloud
Outputs PointCloudData message
variable
Output passthroughDepth
Passthrough depth from which the point cloud was calculated. Useful when the input queue is set to non-blocking behavior.
function
PointCloud()
function
~PointCloud()
function
Input & getColorInput()
Get the optional color input for colorized point clouds. Lazily creates the Sync entry so that depth-only mode works without Sync waiting for a color frame that never arrives. Link an aligned color image (RGB888i, same dimensions as depth) to this input to enable colored point cloud output.
function
void setNumFramesPool(int numFramesPool)
Specify number of frames in pool.
Parameters
  • numFramesPool: How many frames should the pool have
function
void setRunOnHost(bool runOnHost)
Specify whether to run on host or device. By default, the node runs on the host.
function
void useCPU()
Use single-threaded CPU for processing
function
void useCPUMT(uint32_t numThreads)
Use multi-threaded CPU for processing
function
void useGPU(uint32_t device)
Use GPU for point cloud computation
Parameters
  • device: GPU device index (default 0)
function
void setTargetCoordinateSystem(CameraBoardSocket targetCamera, bool useSpecTranslation)
Set target coordinate system to transform point cloud
Parameters
  • targetCamera: Target camera socket
  • useSpecTranslation: Use spec translation instead of calibration (default: false)
function
void setTargetCoordinateSystem(HousingCoordinateSystem housingCS, bool useSpecTranslation)
Set target coordinate system to a housing coordinate system. The point cloud will be transformed into this housing coordinate system.
Parameters
  • housingCS: Target housing coordinate system
  • useSpecTranslation: Whether to use spec translation (default: true)
function
bool runOnHost()
function
void buildInternal()
class

dai::node::PointCloud::Impl

variable
LengthUnit targetLengthUnit
function
Impl()
function
void setLogger(std::shared_ptr<::spdlog::logger > log)
function
void computePointCloudDense(const uint8_t * depthData, std::vector< Point3f > & points)
function
void computePointCloudDenseColored(const uint8_t * depthData, const uint8_t * colorData, std::vector< Point3fRGBA > & points)
function
void applyTransformation(std::vector< PointT > & points)
function
std::vector< PointT > filterValidPoints(const std::vector< PointT > & densePoints)
function
void setLengthUnit(dai::LengthUnit lengthUnit)
function
void useCPU()
function
void useCPUMT(uint32_t numThreads)
function
void useGPU(uint32_t device)
function
void setIntrinsics(float fx, float fy, float cx, float cy, unsigned int width, unsigned int height)
function
void setExtrinsics(const std::vector< std::vector< float >> & transformMatrix)
function
void clearExtrinsics()
