# ImageAlign

The ImageAlign node aligns one sensor's frame to another's, for example a depth map (from ToF/stereo) to a color frame (RGB-D), or a thermal frame to a color frame (RGB-Thermal) when a static depth plane is specified.

## How to place it

#### Python

```python
with dai.Pipeline() as pipeline:
    img_align = pipeline.create(dai.node.ImageAlign)
```

#### C++

```cpp
dai::Pipeline pipeline;
auto align = pipeline.create<dai::node::ImageAlign>();
```

## Inputs and Outputs

The node has three inputs and two outputs:

* `input` - ImgFrame to be aligned (e.g. depth map)
* `inputAlignTo` - ImgFrame whose geometry the input is aligned to (e.g. color frame)
* `inputConfig` - optional ImageAlignConfig message for modifying parameters at runtime
* `outputAligned` - ImgFrame aligned to the `inputAlignTo` frame
* `passthroughInput` - passthrough of the `input` frame on which alignment was performed

## How it works

ImageAlign reprojects the `input` frame onto the `inputAlignTo` frame. The `inputAlignTo` frame is read only once to extract its
metadata; afterwards, every `input` frame is reprojected and aligned to it based on the extrinsic calibration between the two sensors.
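Conceptually, the reprojection works per pixel: a depth pixel is unprojected to a 3D point using the depth camera's intrinsics, transformed into the color camera's coordinate frame using the extrinsic rotation and translation, and projected back using the color camera's intrinsics. A minimal single-pixel sketch (all camera parameters below are made up for illustration; in a real pipeline they come from the device's calibration data):

```python
# Single-pixel sketch of depth-to-color reprojection.
# All camera parameters here are illustrative, not from a real device.

def reproject(u, v, depth_m, K_depth, K_color, R, t):
    """Map a depth pixel (u, v) with depth in meters into color-image coordinates."""
    fx, fy, cx, cy = K_depth
    # 1. Unproject to a 3D point in the depth camera's frame
    X = (u - cx) * depth_m / fx
    Y = (v - cy) * depth_m / fy
    Z = depth_m
    # 2. Transform into the color camera's frame: p' = R @ p + t
    Xc = R[0][0] * X + R[0][1] * Y + R[0][2] * Z + t[0]
    Yc = R[1][0] * X + R[1][1] * Y + R[1][2] * Z + t[1]
    Zc = R[2][0] * X + R[2][1] * Y + R[2][2] * Z + t[2]
    # 3. Project with the color camera's intrinsics
    fx2, fy2, cx2, cy2 = K_color
    return fx2 * Xc / Zc + cx2, fy2 * Yc / Zc + cy2

K_depth = (450.0, 450.0, 320.0, 200.0)    # fx, fy, cx, cy (640x400 depth)
K_color = (1000.0, 1000.0, 640.0, 480.0)  # fx, fy, cx, cy (1280x960 color)
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]     # identity rotation
t = [0.075, 0.0, 0.0]                     # 7.5 cm horizontal baseline

print(reproject(320, 200, 1.0, K_depth, K_color, R, t))  # (715.0, 480.0)
```

The node performs this mapping for the whole frame on-device, using the factory calibration stored on the device rather than hand-entered values.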

## Usage

#### Python

```python
with dai.Pipeline() as pipeline:
    rgb_cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
    video_stream = rgb_cam.requestOutput(size=(1280, 960))

    # Create left/right stereo cameras & StereoDepth node
    left = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
    right = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
    stereo = pipeline.create(dai.node.StereoDepth)

    left.requestOutput(size=(640, 400)).link(stereo.left)
    right.requestOutput(size=(640, 400)).link(stereo.right)

    # Now create ImageAlign node and align stereo depth to RGB stream
    img_align = pipeline.create(dai.node.ImageAlign)
    stereo.depth.link(img_align.input)
    video_stream.link(img_align.inputAlignTo)

    # To get synchronized aligned-depth and color streams, use the Sync node
    sync = pipeline.create(dai.node.Sync)
    img_align.outputAligned.link(sync.inputs['aligned_depth'])
    video_stream.link(sync.inputs["color"])

    # Create an output queue, so you can get synced frames on the host computer
    queue = sync.out.createOutputQueue()

    # ...
```
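Continuing inside the `with` block above, the Sync node emits a `MessageGroup` from which the aligned depth and color frames can be retrieved by the input names used when linking. A sketch of the receiving loop (the pattern below assumes the standard DepthAI v3 host API; what you do with the frames, e.g. colorizing and blending, is up to you):

```python
    pipeline.start()
    while pipeline.isRunning():
        msg_group = queue.get()  # dai.MessageGroup with both synced frames
        aligned_depth = msg_group["aligned_depth"].getFrame()  # depth, aligned to RGB
        color = msg_group["color"].getCvFrame()                # RGB frame
        # ... e.g. colorize the depth map and blend it with the color frame
```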

#### C++

```cpp
dai::Pipeline pipeline;

// Create RGB Camera
auto rgbCam = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_A);
auto videoStream = rgbCam->requestOutput({1280, 960});

// Create left and right stereo cameras
auto left = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_B);
auto right = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_C);
auto stereo = pipeline.create<dai::node::StereoDepth>();

left->requestOutput({640, 400})->link(stereo->left);
right->requestOutput({640, 400})->link(stereo->right);

// Create ImageAlign node
auto imgAlign = pipeline.create<dai::node::ImageAlign>();
stereo->depth.link(imgAlign->input);
videoStream.link(imgAlign->inputAlignTo);

// Create Sync node
auto sync = pipeline.create<dai::node::Sync>();
imgAlign->outputAligned.link(sync->inputs["aligned_depth"]);
videoStream.link(sync->inputs["color"]);

// Create output queue
auto queue = sync->out.createOutputQueue();
// ...
```

## Examples of functionality

 * [Depth alignment](https://docs.luxonis.com/software-v3/depthai/examples/image_align/depth_align.md) - Align and blend RGB and
   depth frames.

## Reference

### dai::node::ImageAlign

Kind: class

ImageAlign node. Aligns the input frame to the frame received on inputAlignTo, based on the extrinsic calibration between the two sensors.

#### std::shared_ptr< ImageAlignConfig > initialConfig

Kind: variable

Initial config to use for the alignment.

#### Input inputConfig

Kind: variable

Input message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.

#### Input input

Kind: variable

Input message. Default queue is non-blocking with size 4.

#### Input inputAlignTo

Kind: variable

Input align to message. Default queue is non-blocking with size 1.

#### Output outputAligned

Kind: variable

Outputs ImgFrame message that is aligned to inputAlignTo.

#### Output passthroughInput

Kind: variable

Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.

#### ImageAlign & setOutputSize(int alignWidth, int alignHeight)

Kind: function

Specify the output size of the aligned image

#### ImageAlign & setOutKeepAspectRatio(bool keep)

Kind: function

Specify whether to keep aspect ratio when resizing

#### ImageAlign & setInterpolation(Interpolation interp)

Kind: function

Specify interpolation method to use when resizing

#### ImageAlign & setNumShaves(int numShaves)

Kind: function

Specify number of shaves to use for this node

#### ImageAlign & setNumFramesPool(int numFramesPool)

Kind: function

Specify number of frames in the pool

#### void setRunOnHost(bool runOnHost)

Kind: function

Specify whether to run on host or device. By default, the node will run on device.

#### bool runOnHost()

Kind: function

Check if the node is set to run on host

#### void run()

Kind: function

#### DeviceNodeCRTP()

Kind: function

#### DeviceNodeCRTP(const std::shared_ptr< Device > & device)

Kind: function

#### DeviceNodeCRTP(std::unique_ptr< Properties > props)

Kind: function

#### DeviceNodeCRTP(std::unique_ptr< Properties > props, bool confMode)

Kind: function

#### DeviceNodeCRTP(const std::shared_ptr< Device > & device, std::unique_ptr< Properties > props, bool confMode)

Kind: function

### Need assistance?

Head over to [Discussion Forum](https://discuss.luxonis.com/) for technical support or any other questions you might have.
