RealSense™ comparison

TL;DR: Compared to RealSense™ stereo cameras, the DepthAI platform adds a host of on-device features to OAK cameras (custom AI models, object tracking, scripting, encoding, etc.) and can be used in embedded applications (see Embedded use-case below).

Depth comparison

We haven’t done any quantitative tests ourselves, but a third party (a customer) sent us their OAK evaluation results, comparing the OAK-D Pro with the RealSense™ D435i.

https://user-images.githubusercontent.com/18037362/184259447-8984e083-05fe-4130-86ef-bfa5ec8bf112.png

Laser dot projector disabled (passive stereo)

https://user-images.githubusercontent.com/18037362/184259825-e19f0631-4325-424a-8d39-2060923b31fe.png

Laser dot projector enabled (active stereo)

https://user-images.githubusercontent.com/18037362/184262380-9ad26b2f-0b31-439d-b887-5c89a2ad67bb.png

Target (color image). Table and wall are featureless surfaces.

Custom stereo depth perception

Our platform was built from the ground up to be customizable. All of our products based on the OAK SoM are open-source, so you can easily redesign the board (see Integrating DepthAI into products), for example to change the stereo baseline distance or to use a different image sensor (we support a number of different sensors).

The OAK FFC line is great for prototyping, as it allows users to pair different camera sensors/optics and place them at the ideal stereo baseline distance for their application.

Below is long-range disparity depth visualized over a color frame. This customer used narrow-FOV M12 lenses with a wide stereo baseline distance (25 cm) to achieve these results with our platform.

https://user-images.githubusercontent.com/18037362/184261853-7d447b7d-1ff9-4c8d-871c-eb230591eae1.png

See the stereo depth documentation for maximum depth perception calculations based on camera intrinsics and baseline distance.
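As a rough sketch of those calculations, depth at a given disparity follows the standard stereo formula Z = focal_px × baseline / disparity. The 72° HFOV, 1280 px frame width, and 7.5 cm baseline below are assumed OAK-D-like numbers (not official specs), and the 40° narrow-FOV figure is an illustrative stand-in for the customer setup above:

```python
import math

def focal_length_px(width_px: float, hfov_deg: float) -> float:
    # Pinhole model: focal length in pixels from horizontal FOV and image width.
    return width_px * 0.5 / math.tan(math.radians(hfov_deg) / 2)

def depth_at_disparity(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    # Stereo depth: Z = focal_px * baseline / disparity.
    return focal_px * baseline_m / disparity_px

# Assumed OAK-D-like numbers: 1280 px wide mono frames, ~72 deg HFOV, 7.5 cm baseline.
f = focal_length_px(1280, 72)                         # ~881 px
print(round(depth_at_disparity(0.075, f, 1.0), 1))    # depth at 1 px disparity -> 66.1 (m)

# Wider baseline + narrower lenses push the max range out dramatically
# (25 cm baseline, hypothetical ~40 deg HFOV M12 lenses):
f_narrow = focal_length_px(1280, 40)                  # ~1758 px
print(round(depth_at_disparity(0.25, f_narrow, 1.0), 1))  # -> 439.6 (m)
```

The minimum disparity the matcher can resolve (1 px here, or less with subpixel mode) caps the maximum perceivable depth, which is why the narrow-FOV, wide-baseline setup reaches so much farther.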

On-device feature comparison

| On-Device Capabilities | OAK-D Pro | OAK-D | OAK-D-Lite | L515 | D415 | D430-D435 | D450-D455 | F455 | F450 | T261-T265 |
|---|---|---|---|---|---|---|---|---|---|
| Custom AI models | ✔️ | ✔️ | ✔️ | | | | | | | |
| Object detection | ✔️ | ✔️ | ✔️ | | | | | | | |
| Object tracking | ✔️ | ✔️ | ✔️ | | | | | | | |
| On-device scripting | ✔️ | ✔️ | ✔️ | | | | | | | |
| Video/Image Encoding | ✔️ | ✔️ | ✔️ | | | | | | | |
| Image Manipulation | ✔️ | ✔️ | ✔️ | | | | | | | |
| Skeleton/Hand Tracking | ✔️ | ✔️ | ✔️ | | ✔️ | ✔️ | ✔️ | | | |
| 3D Semantic Segmentation | ✔️ | ✔️ | ✔️ | | | | | | | |
| 3D Object Pose Estimation | ✔️ | ✔️ | ✔️ | | | | | | | |
| 3D Edge Detection | ✔️ | ✔️ | ✔️ | | | | | | | |
| Feature Tracking | ✔️ | ✔️ | ✔️ | | | | | | | ✔️ |
| 3D Feature Tracking | ✔️ | ✔️ | ✔️ | | | | | | | |
| OCR | ✔️ | ✔️ | ✔️ | | | | | | | |
| Face Recognition | ✔️ | ✔️ | ✔️ | | | | | ✔️ | ✔️ | |
| Encryption | | | | | | | | ✔️ | ✔️ | |

Features described

  • Custom AI models - You can run any AI/NN model(s) on the device, as long as all layers are supported. You can also choose from 200+ pretrained NN models from Open Model Zoo and DepthAI Model Zoo.

  • Object detection - Most popular object detectors have been converted and run on our devices. DepthAI supports onboard decoding of Yolo and MobileNet based NN models.

  • Object tracking - ObjectTracker node comes with 4 tracker types, and it also supports tracking of objects in 3D space.

  • On-device scripting - The Script node enables users to run custom Python 3.9 scripts on the device, typically used to manage the flow of the pipeline (business logic).

  • Video/Image encoding - VideoEncoder node allows encoding into MJPEG, H265, or H264 formats.

  • Image Manipulation - The ImageManip node allows users to resize, warp, crop, flip, and thumbnail image frames, and to do type conversions (YUV420, NV12, RGB, etc.).

  • Skeleton/Hand Tracking - Detect and track key points of a hand or human pose. Geaxgx’s demos: Hand tracker, Blazepose, Movenet.

  • 3D Semantic segmentation - Perceive the world with semantically-labeled pixels. DeeplabV3 demo here.

  • 3D Object Pose Estimation - MediaPipe’s Objectron has been converted to run on OAK cameras. Video here.

  • 3D Edge Detection - The EdgeDetector node uses a Sobel filter to detect edges. With depth information, you can get the physical position of these edges.

  • Feature Tracking - FeatureTracker node detects and tracks key points (features).

  • 3D Feature Tracking - With depth information, you can track these features in physical space.

  • OCR - Optical character recognition, demo here.

  • Face Recognition - Demo here, which runs face detection, alignment, and face recognition (3 different NN models) on the device simultaneously.

  • Encryption - Not yet supported on our platform.

Camera specification

| Specification | OAK-D Pro / -W | OAK-D / -W | OAK-D-Lite | L515 | D415 | D430-D435 | D450-D455 | F455 | F450 | T261-T265 |
|---|---|---|---|---|---|---|---|---|---|
| RGB sensor | IMX378 | IMX378/OV9782 | IMX214 | OV2740 | OV2740 | OV2740 | OV9782 | N/A | N/A | - |
| RGB HFOV (°) | 69 / 109 | 69 / 109 | 73.6 | 70 | 69 | 69 | 90 | N/A | N/A | N/A |
| RGB Shutter | Rolling / Global | Rolling | Rolling | Rolling | Rolling | Rolling | Global | N/A | N/A | N/A |
| RGB resolution | 12MP | 12MP | 13MP | 2MP | 2MP | 2MP | 1MP | N/A | N/A | N/A |
| Depth Type | Active Stereo | Passive Stereo | Passive Stereo | Laser | Active Stereo | Active Stereo | Active Stereo | Active Stereo | Active Stereo | N/A |
| Depth sensor | OV9282 | OV9282 | OV7251 | - | OV2740 | OV9282 | OV9782 | - | - | - |
| Stereo Shutter | Global | Global | Global | - | Rolling | Global | Global | - | - | Global |
| Depth HFOV (°) | 72 / 127 | 72 / 127 | 72 | N/A | 70 | 87 | 87 | 56 | 56 | 173 |
| Min Depth | 20 cm | 20 cm | 20 cm | 25 cm | 45 cm | 28 cm | 52 cm | 30 cm | 30 cm | N/A |
| Depth resolution | 1280x800 | 1280x800 | 640x480 | 1024x768 | 1280x720 | 1280x720 | 1280x720 | N/A | N/A | 848x800 |
| IR LED | ✔️ | | | ✔️ | | | | ✔️ | ✔️ | |
| ToF/Laser | | | | ✔️ | | | | | | |
| IMU | ✔️ | ✔️ | | ✔️ | | ✔️/❌ | ✔️ | | | ✔️ |

Embedded use-case

Unlike RealSense™, our platform supports booting from flash (standalone mode) and features 2-way SPI communication (SPIOut, SPIIn nodes). In standalone/on-the-edge mode you flash your application to the device, so it no longer needs to be connected to a host (RPi/PC/laptop…); more information here.

This allows users to build small, low-power embedded devices and to integrate the OAK SoM into their products, upgrading them with the power of Spatial AI.


Intel, Intel RealSense and Intel Movidius Myriad are trademarks of Intel Corporation or its subsidiaries.

Got questions?

Head over to the Discussion Forum for technical support or any other questions you might have.