The PointCloud node computes a 3D point cloud from depth frames. It runs on the host by default. On RVC4 you can offload it to the device with setRunOnHost(False); on RVC2 only host-side processing is supported. Added in DepthAI v3.6.0.

How to place it
Python
import depthai as dai

with dai.Pipeline() as pipeline:
    pointCloud = pipeline.create(dai.node.PointCloud)

Inputs and Outputs
The node consumes depth frames on inputDepth (and optionally color frames for colorized output) and produces PointCloudData on outputPointCloud. The passthroughDepth output forwards the original depth frame used to compute a given PointCloudData output, which is useful when input queues are non-blocking and you need to correlate depth frames with their corresponding point clouds.

Configuration
Configuration can be provided through initialConfig before pipeline start, or sent at runtime through inputConfig. Sending a new config mid-stream is safe: the node reinitializes automatically when the configuration, calibration, or frame transformation changes. See PointCloudConfig for detailed descriptions and usage examples.

Python
pc = pipeline.create(dai.node.PointCloud)
pc.initialConfig.setOrganized(True)
pc.initialConfig.setLengthUnit(dai.LengthUnit.METER)
pc.initialConfig.setTargetCoordinateSystem(dai.CameraBoardSocket.CAM_A)
pc.setNumFramesPool(8)

Coordinate system
By default, the node computes T_frame_to_ref from the depth frame's ImgTransformation extrinsics and applies it, so the output point cloud is in the reference (origin) camera coordinate system. On top of that, you can apply one of three additional transformations:

1. Camera socket coordinate system
Transform the point cloud into the coordinate system of a chosen camera socket (e.g. CameraBoardSocket.CAM_A). The origin is at the camera sensor, and the rotation follows the camera's optical frame. The node uses the device calibration to compute the extrinsic transformation automatically.

Python
# Point cloud in the coordinate system of the color camera (CAM_A)
pc.initialConfig.setTargetCoordinateSystem(dai.CameraBoardSocket.CAM_A)

2. Housing coordinate system
Transform the point cloud into one of the device's housing coordinate systems (HousingCoordinateSystem).

Python

# Point cloud in the VESA mount coordinate system
pc.initialConfig.setTargetCoordinateSystem(dai.HousingCoordinateSystem.VESA_A)

- HousingCoordinateSystem.CAM_A–CAM_D: Origin at the same point as CameraBoardSocket.CAM_A–CAM_D. The difference is rotation: the X-Y plane is parallel to the front glass of the device, whereas camera socket coordinate systems are rotated to the camera's optical frame.
- HousingCoordinateSystem.FRONT_CAM_A–FRONT_CAM_D: Positioned similarly to HousingCoordinateSystem.CAM_A–CAM_D, but shifted forward to the front side of the front glass.
- HousingCoordinateSystem.VESA_A–VESA_J: Device mounting points.
- HousingCoordinateSystem.IMU: IMU sensor origin.
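Whichever coordinate-system option you choose, the node ends up applying a rigid 4×4 homogeneous transform to every point (see applyTransformation in the reference below). A minimal numpy sketch of that operation, illustrative only and not the node's actual implementation:

```python
import numpy as np

def apply_transform(points: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform T to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (homogeneous @ T.T)[:, :3]

# 90° rotation around the Z axis (same matrix as the custom-matrix example below)
T = np.array([
    [0.0, -1.0, 0.0, 0.0],
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.0],
    [0.0,  0.0, 0.0, 1.0],
])
print(apply_transform(np.array([[1.0, 0.0, 0.0]]), T))  # the X axis maps to the Y axis
```

The same mechanics apply whether the matrix comes from calibration (camera socket or housing modes) or from setTransformationMatrix.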
3. Custom transformation matrix
With setTransformationMatrix you supply T_ref_to_custom, the transform from the reference camera to your custom coordinate system. The node composes it with the frame extrinsics to get the final applied transform:

Python
# 90° rotation around Z axis
transform = [
    [0.0, -1.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]
pc.initialConfig.setTransformationMatrix(transform)

The custom matrix is applied after the frame extrinsics (T_frame_to_ref), which bring points into the reference camera frame.

Interaction between options
Calling setTargetCoordinateSystem with a camera socket or a housing coordinate system sets the coordinate system type to CAMERA_SOCKET or HOUSING respectively; in those modes the custom transformation matrix from setTransformationMatrix is ignored. The custom matrix is only used when the coordinate system type is DEFAULT (i.e. neither setTargetCoordinateSystem overload was called). All three methods can also be called on the node directly (forwarded to initialConfig):

Python
pc.setTargetCoordinateSystem(dai.CameraBoardSocket.CAM_A)
# or
pc.setTargetCoordinateSystem(dai.HousingCoordinateSystem.VESA_A)

Usage
Point cloud from stereo depth
Python
import depthai as dai

pipeline = dai.Pipeline()

left = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
right = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
stereo = pipeline.create(dai.node.StereoDepth)
left.requestOutput((640, 400)).link(stereo.left)
right.requestOutput((640, 400)).link(stereo.right)

pc = pipeline.create(dai.node.PointCloud)
pc.initialConfig.setLengthUnit(dai.LengthUnit.METER)
stereo.depth.link(pc.inputDepth)

q = pc.outputPointCloud.createOutputQueue(maxSize=4, blocking=False)

with pipeline:
    pipeline.start()
    while pipeline.isRunning():
        pclData = q.get()
        points = pclData.getPoints()  # np.ndarray (N, 3) float32
        print(f"Points: {len(points)}, Z=[{pclData.getMinZ():.2f}, {pclData.getMaxZ():.2f}]")

Colorized point cloud
Python
import depthai as dai

pipeline = dai.Pipeline()

left = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
right = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
color = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)

stereo = pipeline.create(dai.node.StereoDepth)
left.requestFullResolutionOutput().link(stereo.left)
right.requestFullResolutionOutput().link(stereo.right)

colorOut = color.requestOutput((640, 400), type=dai.ImgFrame.Type.RGB888i,
                               resizeMode=dai.ImgResizeMode.CROP, enableUndistortion=True)

pc = pipeline.create(dai.node.PointCloud)
pc.initialConfig.setLengthUnit(dai.LengthUnit.METER)

# Align depth to the color camera
platform = pipeline.getDefaultDevice().getPlatform()
if platform == dai.Platform.RVC4:
    imageAlign = pipeline.create(dai.node.ImageAlign)
    stereo.depth.link(imageAlign.input)
    colorOut.link(imageAlign.inputAlignTo)
    imageAlign.outputAligned.link(pc.inputDepth)
else:
    colorOut.link(stereo.inputAlignTo)
    stereo.depth.link(pc.inputDepth)

colorOut.link(pc.inputColor)

q = pc.outputPointCloud.createOutputQueue(maxSize=4, blocking=False)

with pipeline:
    pipeline.start()
    while pipeline.isRunning():
        pcd = q.get()
        if pcd.isColor():
            xyz, rgba = pcd.getPointsRGB()
            print(f"Points: {len(xyz)}, color=yes, Z=[{pcd.getMinZ():.2f}, {pcd.getMaxZ():.2f}]")

Examples
- PointCloud — Minimal colorized point cloud example.
- PointCloud Visualizer — Live 3D visualization with Open3D.
- PointCloud Showcase — Demonstrates filtered, organized, coordinate-transformed, custom-matrix, and colorized modes.
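For intuition, the dense computation (computePointCloudDense in the reference below) amounts to back-projecting each depth pixel through the pinhole camera model, then dropping invalid zero-depth points (cf. filterValidPoints). A rough numpy sketch with made-up intrinsics, not the node's actual code:

```python
import numpy as np

def depth_to_points(depth_mm: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an (H, W) uint16 depth map in millimeters to (N, 3) points in meters."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) * 0.001  # mm -> m (as with LengthUnit.METER)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only valid (non-zero depth) points

depth = np.zeros((4, 4), dtype=np.uint16)
depth[2, 3] = 1000  # 1 m at pixel (u=3, v=2)
pts = depth_to_points(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
print(pts)  # one point: x = (3-2)*1.0/100 = 0.01, y = 0.0, z = 1.0
```

An organized output would instead keep the full (H, W, 3) layout, invalid points included, so each point stays addressable by its pixel coordinates.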
Reference
class
dai::node::PointCloud
variable
std::shared_ptr< PointCloudConfig > initialConfig
Initial config to use when computing the point cloud.
variable
Input inputConfig
Input PointCloudConfig message with the ability to modify parameters at runtime. Default queue is non-blocking with size 4.
variable
Subnode< node::Sync > sync
variable
InputMap & syncInputs
variable
Input & inputDepth
Input message with depth data used to create the point cloud. Routed through the internal Sync subnode.
variable
Output outputPointCloud
variable
Output passthroughDepth
Passthrough of the depth frame from which the point cloud was calculated. Useful when the input queue is set to non-blocking behavior.
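The correlation pattern this output enables can be sketched in plain Python: pair messages by sequence number, as one would with getSequenceNum() on the real DepthAI messages. The Msg type below is a hypothetical stand-in for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Msg:
    seq: int      # stand-in for getSequenceNum() on real messages
    payload: str

def correlate(depth_frames, point_clouds):
    """Pair each point cloud with the depth frame it was computed from, by sequence number."""
    by_seq = {d.seq: d for d in depth_frames}
    return [(by_seq[p.seq], p) for p in point_clouds if p.seq in by_seq]

# With a non-blocking queue some depth frames may be dropped; matching by
# sequence number still pairs the survivors correctly.
depth = [Msg(0, "depth0"), Msg(2, "depth2")]
clouds = [Msg(0, "pcl0"), Msg(1, "pcl1"), Msg(2, "pcl2")]
pairs = correlate(depth, clouds)
print([(d.payload, p.payload) for d, p in pairs])  # [('depth0', 'pcl0'), ('depth2', 'pcl2')]
```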
function
PointCloud()
function
~PointCloud()
function
Input & getColorInput()
function
void setNumFramesPool(int numFramesPool)
Specify number of frames in pool.
Parameters
- numFramesPool: How many frames should the pool have
function
void setRunOnHost(bool runOnHost)
Specify whether to run on host or device. By default, the node runs on the host.
function
void useCPU()
Use single-threaded CPU for processing.
function
void useCPUMT(uint32_t numThreads)
Use multi-threaded CPU for processing.
function
void useGPU(uint32_t device)
Use GPU for point cloud computation.
Parameters
- device: GPU device index (default 0)
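As a configuration sketch of selecting among these host-side processing backends (a fragment only; it requires a connected OAK device to actually run):

```python
import depthai as dai

with dai.Pipeline() as pipeline:
    pc = pipeline.create(dai.node.PointCloud)
    # Pick a host-side processing backend:
    pc.useCPUMT(4)   # multi-threaded CPU with 4 worker threads
    # pc.useCPU()    # single-threaded CPU
    # pc.useGPU(0)   # GPU device index 0
```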
function
void setTargetCoordinateSystem(CameraBoardSocket targetCamera, bool useSpecTranslation)
Set the target coordinate system to transform the point cloud to.
Parameters
- targetCamera: Target camera socket
- useSpecTranslation: Use spec translation instead of calibration (default: false)
function
void setTargetCoordinateSystem(HousingCoordinateSystem housingCS, bool useSpecTranslation)
Set the target coordinate system to a housing coordinate system. The point cloud will be transformed to this housing coordinate system.
Parameters
- housingCS: Target housing coordinate system
- useSpecTranslation: Whether to use spec translation (default: true)
function
bool runOnHost()
function
void buildInternal()
class
dai::node::PointCloud::Impl
variable
LengthUnit targetLengthUnit
function
Impl()
function
void setLogger(std::shared_ptr<::spdlog::logger > log)
function
void computePointCloudDense(const uint8_t * depthData, std::vector< Point3f > & points)
function
void computePointCloudDenseColored(const uint8_t * depthData, const uint8_t * colorData, std::vector< Point3fRGBA > & points)
function
void applyTransformation(std::vector< PointT > & points)
function
std::vector< PointT > filterValidPoints(const std::vector< PointT > & densePoints)
function
void setLengthUnit(dai::LengthUnit lengthUnit)
function
void useCPU()
function
void useCPUMT(uint32_t numThreads)
function
void useGPU(uint32_t device)
function
void setIntrinsics(float fx, float fy, float cx, float cy, unsigned int width, unsigned int height)
function
void setExtrinsics(const std::vector< std::vector< float >> & transformMatrix)
function
void clearExtrinsics()

Need assistance?
Head over to Discussion Forum for technical support or any other questions you might have.