# ROS Bridge

## Introduction

The DepthAI ROS Bridge (the `depthai_bridge` package) is a ROS package that lets you create DepthAI pipelines in a more controlled way using the C++
DepthAI API, and provides converters between DepthAI and ROS message types.

## Available Converters

| **Converter** | **Description** |
| --- | --- |
| Disparity converter | Converts stereo disparity messages |
| Image converter | Converts various types of image messages to/from DAI/OpenCV formats |
| IMG Detections converter | Converts 2D detections |
| IMU converter | Converts IMU data; can optionally interpolate IMU readings |
| Spatial Detections converter | Converts 3D spatial detections |
| Tracked Features converter | Converts tracked features |
| Track Detections converter | Converts DAI tracklets |
| TF Publisher | Converts DAI calibration data and publishes it as ROS transform (TF) data |

## Usage

Refer to the `depthai_examples` package for how to set up a pipeline and use the `BridgePublisher` class and image converters to publish
data in ROS format. Below is a simple example of publishing RGB frames:

```cpp
#include <chrono>
#include <cstdio>
#include <deque>
#include <memory>
#include <thread>

#include "depthai/device/Device.hpp"
#include "depthai/pipeline/Pipeline.hpp"
#include "depthai/pipeline/node/Camera.hpp"
#include "depthai_bridge/BridgePublisher.hpp"
#include "depthai_bridge/ImageConverter.hpp"
#include "depthai_bridge/TFPublisher.hpp"
#include "depthai_bridge/depthaiUtility.hpp"
#include "rclcpp/rclcpp.hpp"
#include "sensor_msgs/msg/image.hpp"

int main(int argc, char** argv) {
    int width = 1280;
    int height = 720;
    std::string tfPrefix = "oak";
    rclcpp::init(argc, argv);
    auto node = rclcpp::Node::make_shared("rgb_publisher");

    auto device = std::make_shared<dai::Device>();
    dai::Pipeline pipeline(device);

    // Define sources and outputs
    auto rgbCamera = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_A);

    // Create output queue
    auto rgbOutputQueue = rgbCamera->requestOutput({width, height})->createOutputQueue(8, false);

    pipeline.start();

    // Create a bridge publisher for RGB images
    auto rgbConverter = std::make_shared<depthai_bridge::ImageConverter>(
        depthai_bridge::getOpticalFrameName(tfPrefix, depthai_bridge::getSocketName(dai::CameraBoardSocket::CAM_A, device->getDeviceName())), false);

    // Publish TF frames derived from the device's calibration data
    auto calibrationHandler = device->readCalibration();
    auto tfPub =
        std::make_unique<depthai_bridge::TFPublisher>(node, calibrationHandler, device->getConnectedCameraFeatures(), tfPrefix, device->getDeviceName());
    auto rgbCameraInfo = rgbConverter->calibrationToCameraInfo(calibrationHandler, dai::CameraBoardSocket::CAM_A, width, height);

    auto rgbPub = std::make_unique<depthai_bridge::BridgePublisher<sensor_msgs::msg::Image, dai::ImgFrame>>(
        rgbOutputQueue,
        node,
        "rgb/image",
        [rgbConverter](std::shared_ptr<dai::ImgFrame> msg, std::deque<sensor_msgs::msg::Image>& rosMsgs) { rgbConverter->toRosMsg(msg, rosMsgs); },
        30,
        rgbCameraInfo,
        "rgb");

    rgbPub->addPublisherCallback();

    // spin_some() processes pending callbacks without blocking, so the loop
    // also exits if the DepthAI pipeline stops running
    while(rclcpp::ok() && pipeline.isRunning()) {
        rclcpp::spin_some(node);
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }

    pipeline.stop();
    rclcpp::shutdown();
    return 0;
}
```
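To compile the example in your own ROS 2 package, the CMake wiring might look like the following minimal sketch. The package, target, and source-file names are placeholders, and it assumes an `ament_cmake` package depending on `rclcpp`, `sensor_msgs`, `depthai`, and `depthai_bridge`; adjust the exported DepthAI target name (`depthai::core` here) to whatever your installed DepthAI version exports.

```cmake
cmake_minimum_required(VERSION 3.8)
project(rgb_publisher_example)  # placeholder package name

find_package(ament_cmake REQUIRED)
find_package(rclcpp REQUIRED)
find_package(sensor_msgs REQUIRED)
find_package(depthai REQUIRED)
find_package(depthai_bridge REQUIRED)

# src/rgb_publisher.cpp holds the example from above
add_executable(rgb_publisher src/rgb_publisher.cpp)
ament_target_dependencies(rgb_publisher rclcpp sensor_msgs depthai_bridge)
target_link_libraries(rgb_publisher depthai::core)  # exported target name may differ

install(TARGETS rgb_publisher DESTINATION lib/${PROJECT_NAME})
ament_package()
```

After building the workspace and sourcing its setup file, the node could then be started with `ros2 run rgb_publisher_example rgb_publisher` (hypothetical names matching the sketch above).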
