# PointCloud

Computes a 3D point cloud from a depth map. Optionally takes an aligned color input to produce colorized point clouds.
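Conceptually, each valid depth pixel is back-projected through the pinhole camera model using the frame's intrinsics. A minimal sketch of that math (illustrative intrinsic values, not the node's actual implementation):

```python
def back_project(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with the given depth to a 3D point
    using the pinhole model. X/Y/Z come out in the unit of `depth`."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point always lands on the optical axis:
print(back_project(320.0, 200.0, 1.0, 450.0, 450.0, 320.0, 200.0))  # (0.0, 0.0, 1.0)
```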

> Runs on the **host** by default. On **RVC4** you can offload it to the device with `setRunOnHost(False)`. On **RVC2** only host-side processing is supported. Added in DepthAI **v3.6.0**.

## How to place it

#### Python

```python
import depthai as dai

with dai.Pipeline() as pipeline:
    pointCloud = pipeline.create(dai.node.PointCloud)
```

#### C++

```cpp
dai::Pipeline pipeline;
auto pointCloud = pipeline.create<dai::node::PointCloud>();
```

## Inputs and Outputs

`passthroughDepth` forwards the original depth frame used to compute a given `PointCloudData` output. This is useful when input queues are
non-blocking and you need to correlate depth frames with their corresponding point clouds.
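A sketch of that correlation logic, pairing the two streams by sequence number (plain Python, independent of the DepthAI API; it assumes both message types expose a sequence number, as DepthAI buffers do):

```python
class SequenceMatcher:
    """Buffers whichever message arrives first and emits a
    (depth, pointcloud) pair once both share a sequence number."""
    def __init__(self):
        self._depth = {}
        self._clouds = {}

    def add_depth(self, seq, msg):
        if seq in self._clouds:
            return msg, self._clouds.pop(seq)
        self._depth[seq] = msg
        return None

    def add_cloud(self, seq, msg):
        if seq in self._depth:
            return self._depth.pop(seq), msg
        self._clouds[seq] = msg
        return None

m = SequenceMatcher()
assert m.add_depth(7, "depth#7") is None   # point cloud not here yet
print(m.add_cloud(7, "cloud#7"))           # ('depth#7', 'cloud#7')
```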

## Configuration

All settings can be set via `initialConfig` before the pipeline starts, or sent at runtime through `inputConfig`. Sending a new config
mid-stream is safe: the node reinitializes automatically when the configuration, calibration, or frame transformation changes.

See [PointCloudConfig](https://docs.luxonis.com/software-v3/depthai/depthai-components/messages/pointcloud_config.md) for detailed
descriptions and usage examples.

#### Python

```python
pc = pipeline.create(dai.node.PointCloud)
pc.initialConfig.setOrganized(True)
pc.initialConfig.setLengthUnit(dai.LengthUnit.METER)
pc.initialConfig.setTargetCoordinateSystem(dai.CameraBoardSocket.CAM_A)
pc.setNumFramesPool(8)
```

#### C++

```cpp
auto pc = pipeline.create<dai::node::PointCloud>();
pc->initialConfig->setOrganized(true);
pc->initialConfig->setLengthUnit(dai::LengthUnit::METER);
pc->initialConfig->setTargetCoordinateSystem(dai::CameraBoardSocket::CAM_A);
pc->setNumFramesPool(8);
```
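With setOrganized(True) as in the snippets above, the point cloud keeps the 2D image layout, so each point can be traced back to its source pixel. Assuming the conventional row-major flattening (an assumption of this sketch, not a documented guarantee), the mapping is:

```python
def organized_index(row, col, width):
    """Flat index of pixel (row, col) in a row-major organized point
    cloud of the given width; invalid pixels keep placeholder points
    instead of being filtered out."""
    return row * width + col

# Pixel (row=2, col=5) of a 640-wide depth map:
print(organized_index(2, 5, 640))  # 1285
```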

## Coordinate system

By default the node reads `T_frame_to_ref` from the depth frame's `ImgTransformation` extrinsics and applies it, so the output point
cloud is in the reference (origin) camera coordinate system. On top of that, you can apply one of three additional
transformations:

### 1. Camera socket coordinate system

Transform the point cloud into the coordinate system of another camera on the device (e.g. CameraBoardSocket.CAM_A). The origin is
at the camera sensor, rotation follows the camera's optical frame. The node uses the device calibration to compute the extrinsic
transformation automatically.

```python
# Point cloud in the coordinate system of the color camera (CAM_A)
pc.initialConfig.setTargetCoordinateSystem(dai.CameraBoardSocket.CAM_A)
```

### 2. Housing coordinate system

Transform the point cloud into a housing coordinate system. This uses the device's housing coordinate definitions to express all
points relative to a physical point on the device enclosure.

```python
# Point cloud in the VESA mount coordinate system
pc.initialConfig.setTargetCoordinateSystem(dai.HousingCoordinateSystem.VESA_A)
```

Available housing coordinate systems:

 * HousingCoordinateSystem.CAM_A – CAM_D — Origin at the same point as CameraBoardSocket.CAM_A – CAM_D. The difference is
   rotation: the X-Y plane is parallel to the front glass of the device, whereas camera socket coordinate systems are rotated to
   the camera's optical frame.
 * HousingCoordinateSystem.FRONT_CAM_A – FRONT_CAM_D — Positioned similarly to HousingCoordinateSystem.CAM_A – CAM_D, but shifted
   forward to the front side of the front glass.
 * HousingCoordinateSystem.VESA_A – VESA_J — Device mounting points.
 * HousingCoordinateSystem.IMU — IMU sensor origin.

### 3. Custom transformation matrix

Apply an arbitrary 4×4 transformation matrix representing `T_ref_to_custom`, the transform from the reference camera to your custom
coordinate system. The node composes it with the frame extrinsics to get the final applied transform:

```python
# 90° rotation around Z axis
transform = [
    [0.0, -1.0, 0.0, 0.0],
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.0],
    [0.0,  0.0, 0.0, 1.0],
]
pc.initialConfig.setTransformationMatrix(transform)
```

If no coordinate system target is set and the transformation matrix is identity, only the frame extrinsics (`T_frame_to_ref`) are
applied, bringing points into the reference camera frame.
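The transform chain described in this section (the custom matrix composed with the frame extrinsics, applied to homogeneous points) can be sketched in plain Python. The matrices below are illustrative, not real calibration values:

```python
def matmul4(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(t, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    x, y, z = p
    v = [x, y, z, 1.0]
    return tuple(sum(t[i][k] * v[k] for k in range(4)) for i in range(3))

# Illustrative extrinsics: frame -> reference is a 0.1 m shift along X
t_frame_to_ref = [[1.0, 0.0, 0.0, 0.1],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]]
# Custom target: 90 degree rotation around Z (same matrix as the example above)
t_ref_to_custom = [[0.0, -1.0, 0.0, 0.0],
                   [1.0,  0.0, 0.0, 0.0],
                   [0.0,  0.0, 1.0, 0.0],
                   [0.0,  0.0, 0.0, 1.0]]

# Final applied transform: custom matrix composed with frame extrinsics
final = matmul4(t_ref_to_custom, t_frame_to_ref)
print(apply(final, (1.0, 0.0, 2.0)))  # (0.0, 1.1, 2.0)
```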

### Interaction between options

The three modes are mutually exclusive. Setting a camera socket or housing coordinate system sets the coordinate system type to
CAMERA_SOCKET or HOUSING respectively; in those modes the custom transformation matrix from `setTransformationMatrix` is ignored.
The custom matrix is only used when the coordinate system type is DEFAULT (i.e. neither `setTargetCoordinateSystem` overload was
called).
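That precedence can be summarized as a small selection function (a sketch of the documented behavior with illustrative names, not the node's actual code):

```python
def select_transform(cs_type, custom_matrix, identity):
    """Which extra transform applies, per the rules above: socket/housing
    targets win, and the custom matrix only counts while the coordinate
    system type is still DEFAULT and the matrix is non-identity."""
    if cs_type in ("CAMERA_SOCKET", "HOUSING"):
        return "calibration-derived transform"  # custom matrix ignored
    if custom_matrix != identity:
        return "custom matrix"
    return "frame extrinsics only"

I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
rot_z = [[0.0, -1.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]

print(select_transform("DEFAULT", rot_z, I4))        # custom matrix
print(select_transform("CAMERA_SOCKET", rot_z, I4))  # calibration-derived transform
```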

All three methods can also be called on the node directly (forwarded to `initialConfig`):

```python
pc.setTargetCoordinateSystem(dai.CameraBoardSocket.CAM_A)
# or
pc.setTargetCoordinateSystem(dai.HousingCoordinateSystem.VESA_A)
```

## Usage

### Point cloud from stereo depth

Generate a basic point cloud from stereo depth.

#### Python

```python
import depthai as dai

pipeline = dai.Pipeline()

left = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
right = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
stereo = pipeline.create(dai.node.StereoDepth)
left.requestOutput((640, 400)).link(stereo.left)
right.requestOutput((640, 400)).link(stereo.right)

pc = pipeline.create(dai.node.PointCloud)
pc.initialConfig.setLengthUnit(dai.LengthUnit.METER)
stereo.depth.link(pc.inputDepth)

q = pc.outputPointCloud.createOutputQueue(maxSize=4, blocking=False)

with pipeline:
    pipeline.start()
    while pipeline.isRunning():
        pclData = q.get()
        points = pclData.getPoints()  # np.ndarray (N, 3) float32
        print(f"Points: {len(points)}, Z=[{pclData.getMinZ():.2f}, {pclData.getMaxZ():.2f}]")
```

#### C++

```cpp
#include <iostream>
#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto left = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_B);
    auto right = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_C);
    auto stereo = pipeline.create<dai::node::StereoDepth>();
    left->requestOutput(std::make_pair(640, 400))->link(stereo->left);
    right->requestOutput(std::make_pair(640, 400))->link(stereo->right);

    auto pc = pipeline.create<dai::node::PointCloud>();
    pc->initialConfig->setLengthUnit(dai::LengthUnit::METER);
    stereo->depth.link(pc->inputDepth);

    auto q = pc->outputPointCloud.createOutputQueue(4, false);

    pipeline.start();
    while(pipeline.isRunning()) {
        auto pclData = q->get<dai::PointCloudData>();
        auto points = pclData->getPoints();
        std::cout << "Points: " << points.size()
                  << ", Z=[" << pclData->getMinZ() << ", " << pclData->getMaxZ() << "]" << std::endl;
    }
    pipeline.stop();
    return 0;
}
```

### Colorized point cloud

Link depth and color images aligned to the same coordinate system. Typically the depth is aligned to the color camera.

#### Python

```python
import depthai as dai

pipeline = dai.Pipeline()

left = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
right = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
color = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)

stereo = pipeline.create(dai.node.StereoDepth)
left.requestFullResolutionOutput().link(stereo.left)
right.requestFullResolutionOutput().link(stereo.right)

colorOut = color.requestOutput((640, 400), type=dai.ImgFrame.Type.RGB888i,
                               resizeMode=dai.ImgResizeMode.CROP, enableUndistortion=True)

pc = pipeline.create(dai.node.PointCloud)
pc.initialConfig.setLengthUnit(dai.LengthUnit.METER)

# Align depth to the color camera
platform = pipeline.getDefaultDevice().getPlatform()
if platform == dai.Platform.RVC4:
    imageAlign = pipeline.create(dai.node.ImageAlign)
    stereo.depth.link(imageAlign.input)
    colorOut.link(imageAlign.inputAlignTo)
    imageAlign.outputAligned.link(pc.inputDepth)
else:
    colorOut.link(stereo.inputAlignTo)
    stereo.depth.link(pc.inputDepth)

colorOut.link(pc.inputColor)

q = pc.outputPointCloud.createOutputQueue(maxSize=4, blocking=False)

with pipeline:
    pipeline.start()
    while pipeline.isRunning():
        pcd = q.get()
        if pcd.isColor():
            xyz, rgba = pcd.getPointsRGB()
            print(f"Points: {len(xyz)}, color=yes, Z=[{pcd.getMinZ():.2f}, {pcd.getMaxZ():.2f}]")
```

#### C++

```cpp
#include <iostream>
#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto left = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_B);
    auto right = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_C);
    auto color = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_A);

    auto stereo = pipeline.create<dai::node::StereoDepth>();
    left->requestFullResolutionOutput()->link(stereo->left);
    right->requestFullResolutionOutput()->link(stereo->right);

    auto colorOut = color->requestOutput(std::make_pair(640, 400), dai::ImgFrame::Type::RGB888i,
                                         dai::ImgResizeMode::CROP, std::nullopt, true);

    auto pc = pipeline.create<dai::node::PointCloud>();
    pc->initialConfig->setLengthUnit(dai::LengthUnit::METER);

    auto platform = pipeline.getDefaultDevice()->getPlatform();
    if(platform == dai::Platform::RVC4) {
        auto imageAlign = pipeline.create<dai::node::ImageAlign>();
        stereo->depth.link(imageAlign->input);
        colorOut->link(imageAlign->inputAlignTo);
        imageAlign->outputAligned.link(pc->inputDepth);
    } else {
        colorOut->link(stereo->inputAlignTo);
        stereo->depth.link(pc->inputDepth);
    }

    colorOut->link(pc->getColorInput());

    auto q = pc->outputPointCloud.createOutputQueue(4, false);

    pipeline.start();
    while(pipeline.isRunning()) {
        auto pcd = q->get<dai::PointCloudData>();
        if(pcd->isColor()) {
            auto points = pcd->getPointsRGB();
            std::cout << "Points: " << points.size() << ", color=yes"
                      << ", Z=[" << pcd->getMinZ() << ", " << pcd->getMaxZ() << "]" << std::endl;
        }
    }
    pipeline.stop();
    return 0;
}
```

## Examples

 * [PointCloud](https://docs.luxonis.com/software-v3/depthai/examples/pointcloud/point_cloud.md) — Minimal colorized point cloud
   example.
 * [PointCloud Visualizer](https://docs.luxonis.com/software-v3/depthai/examples/pointcloud/point_cloud_visualizer.md) — Live 3D
   visualization with Open3D.
 * [PointCloud Showcase](https://docs.luxonis.com/software-v3/depthai/examples/pointcloud/point_cloud_showcase.md) — Demonstrates
   filtered, organized, coordinate-transformed, custom-matrix, and colorized modes.

## Reference

### dai::node::PointCloud

Kind: class

PointCloud node. Computes point cloud from depth frames.

#### dai::node::PointCloud::Impl

Kind: class

##### LengthUnit targetLengthUnit

Kind: variable

##### Impl()

Kind: function

##### void setLogger(std::shared_ptr<::spdlog::logger > log)

Kind: function

##### void computePointCloudDense(const uint8_t * depthData, std::vector< Point3f > & points)

Kind: function

##### void computePointCloudDenseColored(const uint8_t * depthData, const uint8_t * colorData, std::vector< Point3fRGBA > & points)

Kind: function

##### void applyTransformation(std::vector< PointT > & points)

Kind: function

##### std::vector< PointT > filterValidPoints(const std::vector< PointT > & densePoints)

Kind: function

##### void setLengthUnit(dai::LengthUnit lengthUnit)

Kind: function

##### void useCPU()

Kind: function

##### void useCPUMT(uint32_t numThreads)

Kind: function

##### void useGPU(uint32_t device)

Kind: function

##### void setIntrinsics(float fx, float fy, float cx, float cy, unsigned int width, unsigned int height)

Kind: function

##### void setExtrinsics(const std::vector< std::vector< float >> & transformMatrix)

Kind: function

##### void clearExtrinsics()

Kind: function

#### std::shared_ptr< PointCloudConfig > initialConfig

Kind: variable

Initial config to use when computing the point cloud.

#### Input inputConfig

Kind: variable

Input PointCloudConfig message that allows modifying parameters at runtime. The default queue is non-blocking with size 4.

#### Subnode < node::Sync > sync

Kind: variable

Sync subnode for synchronized depth + color input. When only depth is connected, Sync passes through single-item MessageGroups.
When both depth and color are connected, Sync pairs them by timestamp.

#### InputMap & syncInputs

Kind: variable

#### Input & inputDepth

Kind: variable

Input message with depth data used to create the point cloud. Routed through the internal Sync subnode.

#### Output outputPointCloud

Kind: variable

Outputs a PointCloudData message.

#### Output passthroughDepth

Kind: variable

Passthrough of the depth frame from which the point cloud was calculated. Useful when the input queue is non-blocking.

#### PointCloud()

Kind: function

#### ~PointCloud()

Kind: function

#### Input & getColorInput()

Kind: function

Get the optional color input for colorized point clouds. Lazily creates the Sync entry so that depth-only mode works without Sync
waiting for a color frame that never arrives. Link an aligned color image (RGB888i, same dimensions as depth) to this input to
enable colored point cloud output.

#### void setNumFramesPool(int numFramesPool)

Kind: function

Specify the number of frames in the pool.

Parameters:

 * numFramesPool: how many frames the pool should have

#### void setRunOnHost(bool runOnHost)

Kind: function

Specify whether to run on host or device. By default, the node runs on the host.

#### void useCPU()

Kind: function

Use a single-threaded CPU path for processing.

#### void useCPUMT(uint32_t numThreads)

Kind: function

Use a multi-threaded CPU path for processing.

#### void useGPU(uint32_t device)

Kind: function

Use the GPU for point cloud computation.

Parameters:

 * device: GPU device index (default 0)

#### void setTargetCoordinateSystem(CameraBoardSocket targetCamera, bool useSpecTranslation)

Kind: function

Set the target coordinate system to transform the point cloud into.

Parameters:

 * targetCamera: target camera socket
 * useSpecTranslation: use spec translation instead of calibration (default: false)

#### void setTargetCoordinateSystem(HousingCoordinateSystem housingCS, bool useSpecTranslation)

Kind: function

Set the target coordinate system to a housing coordinate system. The point cloud will be transformed into this housing coordinate
system.

Parameters:

 * housingCS: target housing coordinate system
 * useSpecTranslation: whether to use spec translation (default: true)

#### bool runOnHost()

Kind: function

Returns whether the node runs on the host.

#### void buildInternal()

Kind: function

Internal function called during pipeline build.

### Need assistance?

Head over to [Discussion Forum](https://discuss.luxonis.com/) for technical support or any other questions you might have.
