Platform

ON THIS PAGE

  • OAK vs RealSense™
  • Depth accuracy comparison
  • Comparison overview
  • 3rd party evaluation
  • On-device feature comparison
  • Features described
  • Modular design

OAK vs RealSense™

Compared to RealSense™ stereo cameras, OAK cameras offer many more on-device features (custom AI models, object tracking, scripting, video encoding, etc.).

Depth accuracy comparison

From our own evaluation (details in the Stereo cameras accuracy comparison blog post), we have found that our OAK-D-Pro camera has slightly better depth accuracy than the RealSense™ D435i / D455. The OAK-D Long Range has much better accuracy at longer distances because of its wider stereo baseline. For more information, please visit the OAK Depth Accuracy comparison docs.
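
The intuition behind the baseline advantage can be sketched with standard stereo geometry (a rough illustration, not the evaluation method used in the blog post). The 1280 px image width, ~72° HFOV, and 75 mm baseline come from the comparison table below; the 1 px disparity error is an arbitrary example value:

```python
import math

# Standard stereo relations (illustrative only):
#   focal_px ≈ (image_width / 2) / tan(HFOV / 2)
#   depth    =  focal_px * baseline / disparity
# A fixed disparity error therefore produces a depth error that grows with
# depth^2 / (focal_px * baseline) - so a wider baseline reduces error at range.

def depth_error_m(depth_m, baseline_m, width_px, hfov_deg, disparity_err_px=1.0):
    focal_px = (width_px / 2) / math.tan(math.radians(hfov_deg) / 2)
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# OAK-D Pro stereo pair: 1280 px wide, ~72 deg HFOV, 75 mm baseline
for z in (2, 5, 10):
    err = depth_error_m(z, baseline_m=0.075, width_px=1280, hfov_deg=72)
    print(f"{z} m -> ~{err:.2f} m of depth error per 1 px of disparity error")
```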

Comparison overview

Specification    | OAK-D Pro / -W   | OAK-D Lite     | OAK ToF  | D415          | D435          | D455
RGB              | IMX378           | IMX214         | IMX378   | OV2740        | OV2740        | OV9782
RGB HFOV         | 66° / 109°       | 69°            | 66°      | 69°           | 69°           | 90°
RGB Shutter      | Rolling / Global | Rolling        | Rolling  | Rolling       | Rolling       | Global
RGB resolution   | 12MP             | 13MP           | 12MP     | 2MP           | 2MP           | 1MP
Depth Type       | Active Stereo    | Passive Stereo | ToF      | Active Stereo | Active Stereo | Active Stereo
Depth sensor     | OV9282           | OV7251         | 33D ToF  | OV2740        | OV9282        | OV9782
Stereo Shutter   | Global           | Global         | /        | Global        | Global        | Global
Stereo baseline  | 75 mm            | 75 mm          | /        | 55 mm         | 55 mm         | 95 mm
Depth HFOV       | 72° / 127°       | 72°            | 70°      | 65°           | 87°           | 87°
Min Depth        | 20 cm            | 20 cm          | 20 cm    | 45 cm         | 28 cm         | 52 cm
Depth resolution | 1280x800         | 640x480        | 1280x800 | 1024x768      | 1280x720      | 1280x720
IR LED           | ✔️               | -              | ✔️       | -             | -             | -
ToF              | -                | -              | ✔️       | -             | -             | -
IMU              | ✔️               | -              | ✔️       | -             | ✔️            | ✔️

3rd party evaluation

A third party (customer) sent us their OAK evaluation results, comparing the OAK-D Pro with the RealSense™ D435i:

  • Passive stereo (no IR dot projector)
  • Active stereo (IR dot projector)

On-device feature comparison

On-Device Capabilities | OAK-D Pro | OAK-D Lite | D415 | D430-D435 | D450-D455 | F450
Custom AI models       | ✔️        | ✔️         | -    | -         | -         | -
Object detection       | ✔️        | ✔️         | -    | -         | -         | -
Object tracking        | ✔️        | ✔️         | -    | -         | -         | -
On-device scripting    | ✔️        | ✔️         | -    | -         | -         | -
Video/Image Encoding   | ✔️        | ✔️         | -    | -         | -         | -
Image Manipulation     | ✔️        | ✔️         | -    | -         | -         | -
Skeleton/Hand Tracking | ✔️        | ✔️         | ✔️   | ✔️        | ✔️        | -
Feature Tracking       | ✔️        | ✔️         | -    | -         | -         | -
OCR                    | ✔️        | ✔️         | -    | -         | -         | -
Face Recognition       | ✔️        | ✔️         | -    | -         | -         | ✔️

Features described

  • Custom AI models - You can run any AI/NN model(s) on the device, as long as all layers are supported. You can also choose from 200+ pretrained NN models from Open Model Zoo and DepthAI Model Zoo.
  • Object detection - Most popular object detectors have been converted and run on our devices. DepthAI supports onboard decoding of Yolo and MobileNet based NN models.
  • Object tracking - ObjectTracker node comes with 4 tracker types and also supports tracking objects in 3D space (a minimal detection + tracking pipeline is sketched after this list).
  • On-device scripting - Script node lets users run custom Python 3.9 scripts on the device itself, typically used for managing the flow of the pipeline (business logic).
  • Video/Image encoding - VideoEncoder node allows encoding into MJPEG, H265, or H264 formats (see the encoding sketch after this list).
  • Image Manipulation - ImageManip node allows users to resize, warp, crop, flip, and thumbnail image frames and do type conversions (YUV420, NV12, RGB, etc.).
  • Skeleton/Hand Tracking - Detect and track key points of a hand or human pose. Geaxgx's demos: Hand tracker, Blazepose, Movenet.
  • 3D Semantic segmentation - Perceive the world with semantically-labeled pixels. DeeplabV3 demo here.
  • 3D Object Pose Estimation - MediaPipe's Objectron has been converted to run on OAK cameras. Video here.
  • 3D Edge Detection - EdgeDetector node uses Sobel filter to detect edges. With depth information, you can get physical position of these edges.
  • Feature Tracking - FeatureTracker node detects and tracks key points (features).
  • 3D Feature Tracking - With depth information, you can track these features in physical space.
  • OCR - Optical character recognition, demo here.
  • Face Recognition - Demo here, which runs face detection, alignment, and face recognition (3 different NN models) on the device simultaneously.
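
As a reference for the detection and tracking items above, here is a minimal sketch of how on-device detection and the ObjectTracker node are wired together with the DepthAI 2.x Python API. It assumes the blobconverter helper package to fetch a MobileNet-SSD blob; it is an illustration, not a complete application:

```python
import blobconverter
import depthai as dai

pipeline = dai.Pipeline()

# Color camera feeding a 300x300 preview into the detector
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# On-device MobileNet-SSD detection (blob fetched via blobconverter)
nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath(blobconverter.from_zoo(name="mobilenet-ssd", shaves=6))
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)

# Object tracker consuming the detections and the passthrough frames
tracker = pipeline.create(dai.node.ObjectTracker)
tracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)
nn.passthrough.link(tracker.inputTrackerFrame)
nn.passthrough.link(tracker.inputDetectionFrame)
nn.out.link(tracker.inputDetections)

# Send tracklets back to the host
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("tracklets")
tracker.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("tracklets", maxSize=4, blocking=False)
    while True:
        for t in q.get().tracklets:
            print(t.id, t.status, t.roi)
```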
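
Similarly, a minimal sketch of on-device H.265 encoding combined with a Script node (again assuming the DepthAI 2.x Python API; the one-second logging loop is just a placeholder for real business logic):

```python
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)

# Encode the 1080p stream to H.265 on the device
enc = pipeline.create(dai.node.VideoEncoder)
enc.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.H265_MAIN)
cam.video.link(enc.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("h265")
enc.bitstream.link(xout.input)

# Script node running Python 3.9 on the device itself
script = pipeline.create(dai.node.Script)
script.setScript("""
import time
while True:
    node.warn("still encoding...")   # logged from the device side
    time.sleep(1)
""")

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("h265", maxSize=30, blocking=True)
    with open("video.h265", "wb") as f:
        for _ in range(30 * 10):   # ~10 seconds at 30 fps
            f.write(q.get().getData().tobytes())
```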

Modular design

Our platform was built from the ground up with the idea of being customizable. Many of our products based on the OAK-SoM are open-source, so you can easily redesign the board (see Integrating DepthAI into products), for example to change the stereo baseline distance or to use a different image sensor (we support many different sensors).

OAK Modules (FFCs) are great for prototyping, as they allow users to use different camera sensors/optics and place them at an ideal stereo baseline distance for their application.

Below is long-range disparity depth visualized over a color frame. This customer used narrow-FOV M12 lenses with a wide stereo baseline distance (25 cm) to achieve these results with our platform.

See the stereo depth documentation for max depth perception calculations based on camera intrinsics/baseline distance.
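
As a rough sketch of that calculation: the 25 cm baseline comes from the example above, while the ~40° HFOV for the narrow M12 lenses and the 0.5 px minimum-disparity threshold are illustrative assumptions, not measured specifications:

```python
import math

def max_depth_m(width_px, hfov_deg, baseline_m, min_disparity_px=0.5):
    """Depth at which disparity falls to the smallest value the matcher can resolve."""
    focal_px = (width_px / 2) / math.tan(math.radians(hfov_deg) / 2)
    return focal_px * baseline_m / min_disparity_px

# Stock OAK-D Pro stereo pair: 1280 px, ~72 deg HFOV, 75 mm baseline
print(round(max_depth_m(1280, 72, 0.075)))  # theoretical ceiling, ~130 m
# FFC setup above: narrow-FOV lenses (assumed ~40 deg HFOV), 25 cm baseline
print(round(max_depth_m(1280, 40, 0.25)))   # theoretical ceiling, ~880 m
# In practice the useful range is far shorter; the point is how intrinsics
# and baseline enter the calculation.
```
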
Intel, Intel RealSense and Intel Movidius Myriad are trademarks of Intel Corporation or its subsidiaries.