DepthAI’s Documentation¶
DepthAI is a Spatial AI platform used to communicate with and develop applications for our devices: OAK cameras and RAE robots.
It allows you to develop projects and products that require:
- Performant vision (high resolution and FPS, multiple sensors)
- An embedded, low-power solution
Best of all, it is modular, so you can integrate this technology into your own products.
DepthAI Viewer¶
DepthAI Viewer is the visualization tool for DepthAI and OAK cameras. It is a GUI application that runs a demo app by default, visualizing all streams and running inference on the device. It also lets you change the device configuration. The DepthAI Viewer works with both USB and PoE cameras.
To install and run the DepthAI Viewer, run the following commands in the terminal:
python3 -m pip install depthai-viewer
# Run the DepthAI Viewer
python3 -m depthai_viewer
We have also prepared a step-by-step guide here with detailed instructions on how to set up DepthAI and run this script.
Example Use Cases¶
In this section, you’ll find inspiration for what you can build right away with DepthAI.
Pose Estimation
This example shows how to run Google MediaPipe single-body pose tracking models.
Created by our contributor - Geaxgx
Hand Tracking
This example shows how to run Google MediaPipe hand tracking models.
Created by our contributor - Geaxgx
Head posture detection
This example demonstrates a 2-stage pipeline with DepthAI: a face detection NN followed by a head pose estimation NN.
Sign Language Recognition
This example demonstrates how to recognize American Sign Language (ASL) on DepthAI using hand landmarks
Example by Cortic Technology
Gaze Estimation
This example demonstrates how to run 3-stage inference (3 in series, 2 in parallel) on DepthAI using the Gen2 Pipeline Builder.
The original OpenVINO demo on which this example is based is here.
Fatigue Detection
This example demonstrates the Gen2 Pipeline Builder running a face detection network and a head detection network.
This example was created by our partner - ArduCam
Face Recognition
This example demonstrates the Gen2 Pipeline Builder running a face detection network, a head posture estimation network, and a face recognition network.
This example was created by our partner - ArduCam
COVID-19 mask detection
This experiment lets you run the COVID-19 mask/no-mask object detector that was trained via the Google Colab tutorial here.
Pedestrian reidentification
This example demonstrates how to run 2-stage inference on DepthAI using the Gen2 Pipeline Builder.
The original OpenVINO demo on which this example is based is here.
Camera Demo
This example shows how to use DepthAI/megaAI/OAK cameras in the Gen2 Pipeline Builder over USB.
Fire detection
This example demonstrates the Gen2 Pipeline Builder running a fire detection network.
This example was created by our partner - ArduCam
Tools & API Examples¶
In this section, you’ll see examples of various API usage patterns, showing what the API is capable of and how to solve common supporting problems, such as how to stream data, how to collect it, and the like.
- This pipeline implements text detection (EAST) followed by optical character recognition of the detected text.
- This example shows how you can use multiple DepthAI devices on a single host. The demo finds all devices connected to the host and displays an RGB preview from each of them.
- Detects all faces in the frame, extracts face feature vectors, and compares them with a database to perform face recognition.
- This example shows how to sync messages (e.g. NN results with frames) in software, based on either timestamps or sequence numbers.
- Detects license plates and performs license plate recognition on the camera itself.
- This example demonstrates how to do host-side WLS filtering using the rectified_right and depth streams from the DepthAI API.
- A QR code detection model running on the device, combined with an on-host QR code decoder.
- Deploy over 10,000 pre-trained AI models from Roboflow Universe, as well as your own custom Roboflow models.
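One of the examples above syncs messages in software by sequence number: the host buffers messages from each stream until both streams have produced one with the same sequence number, then emits the pair. A minimal pure-Python sketch of that idea (the `Message` class and stream names here are illustrative stand-ins, not the actual DepthAI API):

```python
# Host-side sketch of syncing NN results with frames by sequence number.
# Message is a hypothetical stand-in for a DepthAI message, which exposes
# a sequence number shared by messages originating from the same capture.
from dataclasses import dataclass


@dataclass
class Message:
    seq: int        # sequence number of the capture this message belongs to
    payload: str    # frame data or NN result (simplified to a string here)


class SequenceSync:
    """Pairs messages from two streams that share a sequence number."""

    def __init__(self):
        self.buffers = {"frames": {}, "nn": {}}

    def add(self, stream, msg):
        # Buffer the message under its sequence number
        self.buffers[stream][msg.seq] = msg
        # A pair is complete once the other stream has the same sequence number
        other = "nn" if stream == "frames" else "frames"
        if msg.seq in self.buffers[other]:
            frame = self.buffers["frames"].pop(msg.seq)
            nn = self.buffers["nn"].pop(msg.seq)
            return frame, nn
        return None


sync = SequenceSync()
sync.add("frames", Message(1, "frame-1"))          # no pair yet
pair = sync.add("nn", Message(1, "detections-1"))  # completes the pair
```

Syncing by timestamp works the same way, except that a pair is accepted when two timestamps fall within a small tolerance rather than matching exactly.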
Ecosystem¶
- Python bindings that create the Python API of DepthAI.
- Our core API, written in C++.
- DepthAI ROS wrapper: a basic DepthAI-to-ROS2 interface, largely leveraging the existing depthai-python examples.
- DepthAI Unity wrapper projects and examples. Useful for synthetic dataset generation.
- Luxonis open-sourced baseboards, with Altium design files, documentation, and pictures to help you understand more about the embedded hardware that powers DepthAI.
- Repositories to help you convert your NN models and create blobs.
- Various experiments using DepthAI. You can use these examples as a basis or a reference in your own application.
- A demo application that can load different networks, create pipelines, record video, and more. It includes an example of depth & CNN inference and ready-to-use models.
- A CMake example project that serves as a template for quickly getting started with C++ and the depthai library.
- Source code for the tutorials published on docs.luxonis.com.
- A Dockerfile that lets you run OpenVINO on DepthAI inside a Docker container.
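As a rough idea of what such a container setup can look like, here is a hedged sketch of a Dockerfile that installs the depthai package from PyPI. The base image, package names, and versions are assumptions for illustration; see the actual repository for the real file.

```dockerfile
# Hypothetical sketch - not the actual depthai-docker Dockerfile.
FROM python:3.10-slim

# libusb is needed for USB communication with OAK devices
RUN apt-get update && apt-get install -y --no-install-recommends \
        libusb-1.0-0 && \
    rm -rf /var/lib/apt/lists/*

# Install the DepthAI Python API from PyPI
RUN python3 -m pip install --no-cache-dir depthai

# Sanity check: import the library and print its version
CMD ["python3", "-c", "import depthai; print(depthai.__version__)"]
```

Note that reaching a USB-connected device from inside a container typically requires passing the USB bus through to the container, e.g. with `docker run --privileged -v /dev/bus/usb:/dev/bus/usb ...`.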