# VIO and SLAM

OAK cameras can also be used for localization ([VIO](https://en.wikipedia.org/wiki/Visual_odometry)) and
[SLAM](https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping) (Simultaneous Localization And Mapping).

 * [RVC2](https://docs.luxonis.com/hardware/platform/rvc/rvc2.md): compute - especially CPU performance - on RVC2 is quite
   limited, so for VIO/SLAM you'd need to run the algorithm on the host computer.
 * [RVC3](https://docs.luxonis.com/hardware/platform/rvc/rvc3.md): as it has a quad-core ARM CPU, you can run VIO/SLAM algorithms on the
   camera itself. See [RAE on-device VIO & SLAM](#VIO%20and%20SLAM-RAE%20on-device%20VIO%20%26%20SLAM).

Several SLAM and localization projects that support OAK cameras:

 * [ORB SLAM3](https://qiita.com/nindanaoto/items/20858eca08aad90b5bab) with an OAK-D and ROS1 by @nimda
 * [RTAB-Map](https://github.com/introlab/rtabmap) recently ([PR here](https://github.com/introlab/rtabmap/pull/696)) added
   support for depthai and OAK cameras (via ROS)
 * [SpectacularAI's SLAM](https://twitter.com/oseiskar/status/1536344550305763328?s=20&t=YY432W59nsZd6_IhhfBW4A) with OAK-D - Free
   for non-commercial use
 * [ArduCam Visual SLAM tutorial](https://www.arducam.com/docs/opencv-ai-kit-oak/performing-location-with-visual-slam/)
 * [DepthAI-SLAM](https://github.com/bharath5673/depthai-slam)
 * [Drone on-device NN-based localization](#VIO%20and%20SLAM-Drone%20on-device%20NN-based%20localization)

## On-device SuperPoint for localization and SLAM

A customer shared with us a solution that runs the SuperPoint ([GitHub repo](https://github.com/rpautrat/SuperPoint),
[arXiv paper](https://arxiv.org/abs/1712.07629)) feature-extraction NN on-device (on an
[RVC2-based](https://docs.luxonis.com/hardware/platform/rvc/rvc2.md) OAK-D) and then uses those features for localization and SLAM
on the host computer.
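
Below is a minimal sketch of what such a pipeline could look like with the DepthAI Python API. The blob path and input resolution are assumptions for illustration (the SuperPoint model would first need to be compiled to a `.blob`, e.g. with blobconverter); decoding the keypoint heatmap/descriptors and running the SLAM back-end stay on the host.

```python
import depthai as dai

# Hypothetical path to a SuperPoint model compiled for RVC2 (e.g. via blobconverter)
SUPERPOINT_BLOB = "superpoint_200x320.blob"

pipeline = dai.Pipeline()

# Grayscale camera feeding the feature-extraction network
mono = pipeline.create(dai.node.MonoCamera)
mono.setBoardSocket(dai.CameraBoardSocket.CAM_B)
mono.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)

# Resize to the (assumed) NN input resolution
manip = pipeline.create(dai.node.ImageManip)
manip.initialConfig.setResize(320, 200)
mono.out.link(manip.inputImage)

# Run SuperPoint on the camera
nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath(SUPERPOINT_BLOB)
manip.out.link(nn.input)

# Stream the raw NN output (keypoint heatmap + descriptors) to the host
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("superpoint")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("superpoint", maxSize=4, blocking=False)
    while True:
        nn_data = q.get()
        # Decode keypoints/descriptors from nn_data here and feed them
        # to the localization/SLAM back-end running on the host.
```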

## RAE on-device VIO & SLAM

The demo below shows on-device VIO & SLAM running on the RAE using the [Spectacular AI](https://www.spectacularai.com/) SDK.
Disparity matching and feature extraction + tracking are done on accelerated blocks on the
[RVC3](https://docs.luxonis.com/hardware/platform/rvc/rvc3.md). Features are then combined with disparity to provide tracked 3D
points used by visual odometry, and the Spectacular AI SDK fuses that with IMU data to provide accurate localization of the robot.
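
For reference, this is roughly how the Spectacular AI SDK is driven from Python on a host-connected OAK-D; it's a minimal sketch based on their public DepthAI plugin examples, assuming the `spectacularAI` package is installed, so treat the exact calls as illustrative.

```python
import depthai
import spectacularAI

pipeline = depthai.Pipeline()
# The plugin inserts the stereo, feature-tracking and IMU nodes it needs
vio_pipeline = spectacularAI.depthai.Pipeline(pipeline)

with depthai.Device(pipeline) as device, \
     vio_pipeline.startSession(device) as vio_session:
    while True:
        out = vio_session.waitForOutput()  # fused 6-DoF pose from VIO
        print(out.asJson())
```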

## Syncing frames and IMU messages

For VIO/SLAM solutions, you would want to sync IMU messages with the middle of the frame's exposure. For exposure timings and
timestamps, see [Frame capture graphs](https://docs.luxonis.com/hardware/platform/deploy/frame-sync.md) for details. [See
here](https://github.com/luxonis/oak-examples/tree/master/gen2-syncing#imu--rgb--depth-timestamp-syncing) for an IMU/frame syncing
demo.
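
As a rough illustration, host-side matching can be as simple as picking the IMU sample whose device timestamp is closest to the frame's mid-exposure time. The sketch below assumes the frame timestamp is taken at the end of exposure (verify this against the frame capture graphs linked above for your sensor and firmware) and compares device timestamps of both streams:

```python
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("rgb")
cam.isp.link(xout_rgb.input)

imu = pipeline.create(dai.node.IMU)
imu.enableIMUSensor(dai.IMUSensor.GYROSCOPE_RAW, 400)
imu.setBatchReportThreshold(1)
imu.setMaxBatchReports(10)
xout_imu = pipeline.create(dai.node.XLinkOut)
xout_imu.setStreamName("imu")
imu.out.link(xout_imu.input)

with dai.Device(pipeline) as device:
    q_rgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    q_imu = device.getOutputQueue("imu", maxSize=50, blocking=False)
    gyro_samples = []
    while True:
        imu_data = q_imu.tryGet()
        if imu_data is not None:
            for packet in imu_data.packets:
                gyro_samples.append(packet.gyroscope)
        frame = q_rgb.tryGet()
        if frame is not None and gyro_samples:
            # Assumption: timestamp ~ end of exposure, so mid-exposure is
            # half the exposure time earlier. Check the capture graphs.
            mid_exposure = frame.getTimestampDevice() - frame.getExposureTime() / 2
            closest = min(gyro_samples,
                          key=lambda s: abs(s.getTimestampDevice() - mid_exposure))
            gyro_samples = gyro_samples[-200:]  # keep the buffer bounded
```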

Some more advanced algorithms weight multiple IMU messages (from before and after the exposure) and interpolate the final value.
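
For example, a simple linear interpolation between the samples immediately before and after the mid-exposure timestamp could look like this (plain-Python helper for illustration, not part of the DepthAI API):

```python
def interpolate_imu(before, after, t_mid):
    """Linearly interpolate two (timestamp, value) IMU samples around t_mid.

    `before` and `after` are (t, value) pairs with t_before <= t_mid <= t_after;
    `value` could be e.g. a single-axis gyro rate.
    """
    (t0, v0), (t1, v1) = before, after
    if t1 == t0:
        return v0
    w = (t_mid - t0) / (t1 - t0)   # weight of the later sample
    return (1 - w) * v0 + w * v1
```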

## Drone on-device NN-based localization

A novel AI approach to spatial localization using an OAK camera: the drone runs neural inference on-device to estimate its own
position. [Paper here](https://link.springer.com/article/10.1007/s11554-023-01259-x), [GitHub repo
here](https://github.com/QuetzalCpp/DeepPilot4Pose).
