DepthAI’s Documentation
DepthAI is a Spatial AI platform used for communication with and development of our devices: OAK cameras and RAE robots.
It allows you to develop projects and products that require:
Performant (high resolution and FPS, multiple sensors)
Embedded, low power solution
Best of all, it is modular and you can integrate this technology into your products.
Demo Script

The demo script is our multipurpose command-line demo tool, built around the pipeline builder API, that lets you try DepthAI features straight from the command line - no coding required! It works equally over USB and PoE, automatically discovering any PoE DepthAI device on your LAN and/or any USB DepthAI device connected to your computer. If multiple devices are connected, it will prompt you to choose which one to use for the demo.
To install and run the demo script with your OAK camera, type the following commands in the terminal:
git clone https://github.com/luxonis/depthai.git
cd depthai
python3 install_requirements.py
python3 depthai_demo.py
Then follow the README.md for more usage examples. We have also prepared a step-by-step guide here, with detailed instructions on how to set up your DepthAI device and run this script.
If you have issues during the installation, see our Installation page for additional OS-specific instructions.
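If you want to go beyond the demo script, the sketch below shows the kind of pipeline the pipeline builder API lets you define: a color camera streaming its preview frames to the host. This is a minimal illustration, assuming the depthai and opencv-python packages are installed and a single OAK device is reachable over USB or PoE; it is not the demo script itself.

import cv2
import depthai as dai

# Define the pipeline: a color camera streaming its preview frames to the host
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)

# Connect to the first available device (USB or PoE) and display the stream
with dai.Device(pipeline) as device:
    queue = device.getOutputQueue("preview", maxSize=4, blocking=False)
    while True:
        frame = queue.get().getCvFrame()
        cv2.imshow("preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break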
Example Use Cases
In this section, you’ll find inspiration for what you can build right away with DepthAI.

Pose Estimation
This example shows how to run Google Mediapipe single body pose tracking models
Created by our contributor - Geaxgx

Hand Tracking
This example shows how to run Google Mediapipe hand tracking models
Created by our contributor - Geaxgx

Head posture detection
This example demonstrates a 2-stage pipeline with DepthAI: a face detection NN followed by a head pose estimation NN
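As a rough sketch of this 2-stage pattern (using a placeholder detection blob rather than the example’s actual models), the first stage below runs a detection network on the device, and the host crops each detection where a second-stage model, such as head pose estimation, would then run:

import cv2
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# Stage 1: a detection NN running on the device (placeholder blob path)
detector = pipeline.create(dai.node.MobileNetDetectionNetwork)
detector.setBlobPath("face-detection.blob")  # placeholder, not the example's actual model
detector.setConfidenceThreshold(0.5)
cam.preview.link(detector.input)

xout_frame = pipeline.create(dai.node.XLinkOut)
xout_frame.setStreamName("frame")
cam.preview.link(xout_frame.input)

xout_det = pipeline.create(dai.node.XLinkOut)
xout_det.setStreamName("detections")
detector.out.link(xout_det.input)

with dai.Device(pipeline) as device:
    frames = device.getOutputQueue("frame", maxSize=4, blocking=False)
    dets = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        # Note: for brevity, frames and detections are not explicitly synced here
        frame = frames.get().getCvFrame()
        h, w = frame.shape[:2]
        for d in dets.get().detections:
            # Detections are normalized; convert to pixel coordinates and clamp
            x1, y1 = max(0, int(d.xmin * w)), max(0, int(d.ymin * h))
            x2, y2 = min(w, int(d.xmax * w)), min(h, int(d.ymax * h))
            face = frame[y1:y2, x1:x2]
            # Stage 2 would run here, e.g. a head pose estimation model on `face`
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.imshow("two-stage", frame)
        if cv2.waitKey(1) == ord("q"):
            break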

Sign Language Recognition
This example demonstrates how to recognize American Sign Language (ASL) on DepthAI using hand landmarks
Example by Cortic Technology
Gaze Estimation
This example demonstrates how to run 3-stage inference (3-series, 2-parallel) on DepthAI using the Gen2 Pipeline Builder.
The original OpenVINO demo on which this example is based is here.

Fatigue Detection
This example demonstrates the Gen2 Pipeline Builder running a face detection network and a head detection network
This example was created by our partner - ArduCam

Face Recognition
This example demonstrates the Gen2 Pipeline Builder running a face detection network, a head posture estimation network, and a face recognition network
This example was created by our partner - ArduCam
COVID-19 mask detection
This experiment allows you to run the COVID-19 mask/no-mask object detector which was trained via
the Google Colab tutorial here.
Pedestrian reidentification
This example demonstrates how to run 2-stage inference on DepthAI using the Gen2 Pipeline Builder.
The original OpenVINO demo on which this example is based is here.

Camera Demo
This example shows how to use the DepthAI/megaAI/OAK cameras in the Gen2 Pipeline Builder over USB.

Fire detection
This example demonstrates the Gen2 Pipeline Builder running a fire detection network
This example was created by our partner - ArduCam
Tools & API Examples
In this section, you’ll see examples of various API usage permutations, showing what the API is capable of or solving some meta-problem, such as how to stream data, how to collect it, and the like.
This pipeline implements text detection (EAST) followed by optical character recognition of the detected text
This example shows how you can use multiple DepthAI devices on a single host. The demo finds all devices connected to the host and displays an RGB preview from each of them
Detects all faces in the frame, extracts face feature vectors, and compares them against a database to perform face recognition
This example shows how to sync messages (e.g. NN results with frames) in software, based on either timestamps or sequence numbers (a host-side sketch follows after this list)
Detects license plates and performs license plate recognition on the camera itself
This example demonstrates how to do host-side WLS filtering using the rectified_right and depth streams from the DepthAI API
QR code detection model running on the device, combined with an on-host QR code decoder
Deploy over 10,000 pre-trained AI models from Roboflow Universe as well as your own Roboflow custom models
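As a host-side illustration of syncing by sequence number (a sketch assuming you already have two DepthAI output queues, such as the frame and detection queues from the earlier two-stage sketch), the helper below pairs frames with NN results that carry the same sequence number:

def synced_pairs(frame_queue, nn_queue):
    # Pair messages from two DepthAI output queues by their sequence numbers
    frames, results = {}, {}
    while True:
        frame = frame_queue.tryGet()
        if frame is not None:
            frames[frame.getSequenceNum()] = frame
        result = nn_queue.tryGet()
        if result is not None:
            results[result.getSequenceNum()] = result
        # Yield every (frame, result) pair that shares a sequence number
        for seq in sorted(set(frames) & set(results)):
            yield frames.pop(seq), results.pop(seq)

You would then iterate over synced_pairs(frames, dets) instead of calling get() on each queue separately; syncing by timestamp works the same way, just with a tolerance window instead of exact equality.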
Ecosystem
Here you’ll find the Python bindings that make up the Python API of DepthAI
Our core API, written in C++
DepthAI ROS Wrapper. This is a basic DepthAI-to-ROS2 interface, largely leveraging the existing depthai-python examples.
DepthAI Unity Wrapper projects and examples. Useful for synthetic dataset generation.
This repository contains Luxonis’ open-sourced baseboards: Altium design files, documentation, and pictures to help you understand more about the embedded hardware that powers DepthAI.
Here you can find repositories to help you convert your NN and create BLOBs.
In this repository, you’ll find various experiments using DepthAI. You can use these examples as a basis or a reference in your own application.
This repo contains a demo application which can load different networks, create pipelines, record video, etc. The program includes an example of depth & CNN inference and ready-to-use models.
CMake example project which serves as a template for quickly getting started with C++ and the depthai library
This repo contains the source code for tutorials published on docs.luxonis.com.
This repository contains a Dockerfile that allows you to run OpenVINO on DepthAI inside a Docker container.