# Build from Source

## Install dependencies

This tutorial assumes ROS 2 Humble, Jazzy, Kilted, or newer is installed on your system. Follow the official ROS 2 installation
instructions for your distro first.

```bash
sudo apt-get update \
   && sudo apt-get -y install --no-install-recommends software-properties-common git libusb-1.0-0-dev wget zsh python3-colcon-common-extensions zip unzip tar
```

> **Version note**
> If you are building on **Humble** or **Jazzy**, the ROS v3 packages use the `_v3` suffix (for example `depthai_ros_driver_v3`).
> From **Kilted** onwards, v3 is the default and the package names are unsuffixed.

If rosdep is not yet installed and initialized on your system, execute the following steps:

```bash
sudo apt install python3-rosdep
sudo rosdep init
rosdep update
```

## Setting up procedure

The following setup procedure assumes you have CMake >= 3.10.2 and OpenCV >= 4.0.0. We chose `dai_ws` as the name
of the new folder, as it will be our DepthAI ROS workspace.

```bash
mkdir -p dai_ws/src
cd dai_ws/src
git clone https://github.com/luxonis/depthai-core.git
cd depthai-core && git submodule update --init --recursive && cd ..
git clone https://github.com/luxonis/depthai-ros.git

# Check out the release branches matching your target ROS distro:
# Humble: depthai-core -> v3-humble, depthai-ros -> v3-humble
# Jazzy:  depthai-core -> v3-jazzy,  depthai-ros -> v3-jazzy
# Kilted: depthai-core -> main,      depthai-ros -> kilted
# For example, on Humble:
#   (cd depthai-core && git checkout v3-humble)
#   (cd depthai-ros && git checkout v3-humble)

cd ..
rosdep install --from-paths src --ignore-src -r -y
source /opt/ros/$ROS_DISTRO/setup.bash
MAKEFLAGS="-j1 -l1" colcon build
source install/setup.bash
```
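Once the workspace is built and sourced, you can sanity-check the installation with a quick launch (this assumes an OAK device is connected; on Humble/Jazzy remember the `_v3` package suffix):

```bash
# Launch the camera driver from the freshly built workspace.
# On Humble/Jazzy the package is named depthai_ros_driver_v3 instead.
ros2 launch depthai_ros_driver camera.launch.py
```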

> If you are using a lower-end PC or a Raspberry Pi, a standard build may use a lot of RAM and bog down your machine. To avoid
> that, you can use the `build.sh` script from your workspace (it just wraps colcon commands):
> ```bash
> ./src/depthai-ros/build.sh
> ```

# Docker

You can also build and run Docker images on your local machine. To do that, first add the USB rules as in the step above.

## Building

Clone the repository and run the following inside it (the branch you are on matters):

```bash
docker build --build-arg USE_RVIZ=1 -t depthai-ros .
```

If you run out of RAM during the build, you can also set `BUILD_SEQUENTIAL=1` to build the packages one at a time; it takes
longer, but uses less RAM.
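Assuming the Dockerfile accepts `BUILD_SEQUENTIAL` as a build argument (in the same way it accepts `USE_RVIZ`), the invocation would look like:

```bash
# Build the image one package at a time to keep peak RAM usage low
docker build --build-arg USE_RVIZ=1 --build-arg BUILD_SEQUENTIAL=1 -t depthai-ros .
```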

The `USE_RVIZ` arg means RViz will be installed inside the image. If you want to run it, you also need to execute the following
command (you'll have to do it again after restarting your PC):

```bash
xhost +local:docker
```
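Since `xhost +local:docker` grants any local container access to your X server, you may want to revoke that access once you are done:

```bash
# Revoke the X server access granted to local Docker containers
xhost -local:docker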

Then you can run your image in the following way:

```bash
docker run -it -v /dev/:/dev/ --privileged -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix depthai-ros
```

This will run an interactive Docker session. You can also try:

```bash
docker run -it -v /dev/:/dev/  --privileged -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix depthai-ros [CMD]
```

where `[CMD]` is the command executed when the container starts, for example:

 * `bash` (lets you browse files, modify the code, and rebuild it)
 * `zsh` (another installed shell with some advantages over bash)
 * `ros2 launch depthai_ros_driver camera.launch.py` (this is just an example; any launch file will work here). A side note: launch
   files in depthai_ros_driver have some default parameters set by .yaml files inside the driver. You can either edit them inside
   the container itself, or create a .yaml file on your host (say /home/YOUR_USERNAME_HERE/params/example_config.yaml)
   and pass it as an argument to the executable, as follows:

```bash
docker run -it -v /dev/:/dev/ -v /home/YOUR_USERNAME_HERE/params:/params --network host --privileged -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix depthai-ros ros2 launch depthai_ros_driver driver.launch.py params_file:=/params/example_config.yaml
```
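For reference, such a params file follows the standard ROS 2 parameter-file layout. The parameter names below are illustrative only; check the default .yaml files shipped inside depthai_ros_driver for the exact keys:

```yaml
# Hypothetical example_config.yaml - keys are for illustration only
/oak:
  ros__parameters:
    camera:
      i_nn_type: none      # example: disable the on-device neural network
      i_enable_imu: false  # example: turn off IMU publishing
```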

Note: to keep the images compact, only some external dependencies are installed. For example, if you want to try the RTABMap
example in Docker, you have to do one of the following:

 * Install it by running the container in bash/zsh mode
 * Modify the Dockerfile so that it's installed during the build (you'll have to rebuild the image after that)
 * Run the base camera node in our container and RTABMap separately on the host or in a separate container (see the launch file
   for which parameters/topic names need to be changed when running separately)
