OAK vs StereoLabs ZED

Compared to StereoLabs ZED cameras, OAK cameras offer extensive on-device processing: stereo depth estimation/disparity matching, custom AI models, object tracking, scripting, encoding, and more.

Overview

Here's a quick comparison of on-device capabilities between OAK cameras and ZED cameras. More details can be found in On-device feature comparison.
| On-device capability  | OAK cameras | ZED cameras |
|-----------------------|-------------|-------------|
| Camera ISP            | ✔️          | ✔️          |
| Stereo matching       | ✔️          | -           |
| Stereo postprocessing | ✔️          | -           |
| AI processing         | ✔️          | -           |
| CV processing         | ✔️          | -           |
| Video encoding        | ✔️          | -           |
Essentially, ZED cameras require a powerful host computer with a capable NVIDIA GPU to process the stereo data and run AI models. OAK cameras do all of this processing on-device, eliminating the need for a powerful host (see Host requirements for details).
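To illustrate what "stereo matching" means, here is a toy block matcher in plain Python: for each patch on a left-image scanline it searches the right scanline for the horizontal shift (disparity) with the lowest sum of absolute differences. This is only a sketch of the principle; OAK devices run a far more sophisticated matcher, with postprocessing, in dedicated hardware.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length patches."""
    return sum(abs(x - y) for x, y in zip(a, b))

def match_row(left, right, window=3, max_disp=4):
    """For each window on the left scanline, find the disparity (shift)
    whose right-scanline window minimizes the SAD cost."""
    half = window // 2
    disparities = []
    for x in range(half, len(left) - half):
        patch = left[x - half:x + half + 1]
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x - half) + 1):
            cost = sad(patch, right[x - d - half:x - d + half + 1])
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

left  = [0, 0, 0, 0, 0, 9, 9, 9, 0, 0, 0, 0]  # bright feature at x = 5..7
right = [0, 0, 0, 9, 9, 9, 0, 0, 0, 0, 0, 0]  # same feature shifted 2 px left
disp = match_row(left, right)                  # feature pixels match at disparity 2
```

Nearby objects shift more between the two views, so larger disparity means smaller depth; converting disparity to metric depth uses the camera's focal length and baseline.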

Depth accuracy comparison

From our own evaluation (details in the Stereo cameras accuracy comparison blog post), we found that among the tested cameras, the OAK-D Long Range delivers the best long-range depth accuracy. Its 15cm baseline significantly reduces depth error, making it ideal for applications requiring extended-range perception.

Camera Performance Overview
  • OAK-D Long Range: Excels in long-range performance with a 15cm baseline, ensuring great depth accuracy and minimal errors at greater distances.
  • ZED 2i: Performs well at long ranges, benefiting from a 12cm baseline and high 2K resolution.
  • OAK-D Pro: Provides a balanced active stereo depth estimation performance across various distances with its 7.5cm baseline.
For more details, please visit Depth Accuracy comparison docs.
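The baseline figures above matter because stereo depth error grows quadratically with distance and shrinks with baseline and focal length: ΔZ ≈ Z²·Δd / (f·B). A quick sketch of this relationship (the 800 px focal length and 0.125 px disparity error are illustrative assumptions, not measured values for these cameras):

```python
def depth_error(z_m, baseline_m, focal_px, disparity_err_px=0.125):
    """Approximate stereo depth error: dZ ~= Z^2 * delta_d / (f * B)."""
    return z_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Hypothetical 800 px focal length; real values come from camera calibration.
for b in (0.075, 0.15):  # OAK-D Pro vs OAK-D Long Range baselines
    err_cm = depth_error(4.0, b, 800) * 100
    print(f"baseline {b * 100:.1f} cm -> error at 4 m: {err_cm:.2f} cm")
```

Doubling the baseline from 7.5cm to 15cm halves the expected depth error at any given distance, which is why the Long Range model performs better far away.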

Host requirements

OAK Cameras

With most processing occurring on the device, OAK cameras can efficiently run on low-powered host computers like the Raspberry Pi Zero.

ZED Cameras

  • No Onboard Compute: All AI and depth processing are handled by a connected host, which demands considerably more processing power.
  • Minimum System Requirements for ZED Series (ZED 2i, etc.):
    • CPU: Dual-core 2.3GHz or faster
    • RAM: At least 4GB
    • GPU: NVIDIA GPU with Compute Capability > 3.0 (required for real-time depth processing)
  • ZED X Series (ZED X, ZED X Mini) requirements:
    • Requires NVIDIA Jetson AGX Orin / Orin NX for operation

Comparison overview

| Specification    | OAK-D Pro / -W   | OAK-D Lite     | OAK ToF  | ZED Mini       | ZED 2i 4mm / 2.1mm |
|------------------|------------------|----------------|----------|----------------|--------------------|
| RGB sensor       | IMX378           | IMX214         | IMX378   | OV4689         | OV4689             |
| RGB HFOV         | 66˚ / 109˚       | 69˚            | 66˚      | 102˚           | 72˚ / 110˚         |
| RGB shutter      | Rolling / Global | Rolling        | Rolling  | Rolling        | Rolling            |
| RGB resolution   | 12MP             | 13MP           | 12MP     | 4MP            | 4MP                |
| Depth type       | Active stereo    | Passive stereo | ToF      | Passive stereo | Passive stereo     |
| Depth sensor     | OV9282           | OV7251         | 33D ToF  | OV2740         | OV9282             |
| Stereo shutter   | Global           | Global         | -        | Rolling        | Rolling            |
| Stereo baseline  | 7.5cm            | 7.5cm          | -        | 6.3cm          | 12cm               |
| Depth HFOV       | 72˚ / 127˚       | 72˚            | 70˚      | 102˚           | 72˚ / 110˚         |
| Min depth        | 20 cm            | 20 cm          | 20 cm    | 10 cm          | 150 cm / 30 cm     |
| Depth resolution | 1280x800         | 640x480        | 1280x800 | 1920x1080      | 1920x1080          |
| IR LED           | ✔️               | -              | ✔️       | -              | -                  |
| ToF              | -                | -              | ✔️       | -              | -                  |
| IMU              | ✔️               | -              | ✔️       | ✔️             | ✔️                 |
| Barometer        | -                | -              | -        | -              | ✔️                 |

Modular design

Our platform was built from the ground up to be customizable. All of our products based on the OAK-SoM are open-source, so you can easily redesign the board (see Integrating DepthAI into products), for example to change the stereo baseline distance or to use a different image sensor (we support a number of different sensors).

The OAK FFC line is great for prototyping, as it allows users to try different camera sensors/optics and place them at an ideal stereo baseline distance for their application.

Below is long-range disparity depth visualized over a color frame. This customer used narrow-FOV M12 lenses with a wide stereo baseline distance (25cm) to achieve these results with our platform.

See the stereo depth documentation for max depth perception calculations based on camera intrinsics/baseline distance.
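The max/min depth calculation referenced above follows directly from the stereo relation Z = f·B/d: the farthest perceivable depth corresponds to the smallest resolvable disparity, and the closest to the largest disparity searched. A sketch (the 800 px focal length is an illustrative assumption; 95 px is DepthAI's default disparity search range, and extended-disparity mode lowers the minimum depth further):

```python
def depth_range(focal_px, baseline_m, max_disparity_px=95, min_disparity_px=1):
    """Usable stereo depth range from Z = f * B / disparity.
    Largest searched disparity -> nearest depth; smallest -> farthest."""
    near = focal_px * baseline_m / max_disparity_px
    far = focal_px * baseline_m / min_disparity_px
    return near, far

# Hypothetical 800 px focal length with the 25 cm baseline mentioned above.
near, far = depth_range(800, 0.25)
print(f"depth range: {near:.2f} m to {far:.0f} m")
```

Widening the baseline or narrowing the FOV (longer focal length in pixels) pushes both limits outward, which is exactly the trade-off the customer above exploited for long-range depth.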

On-device feature comparison

OAK cameras integrate a wide range of advanced processing capabilities directly on-device, eliminating the need for a powerful external host. In contrast, StereoLabs™ ZED cameras rely entirely on host-based processing. Here's a snapshot of what Luxonis offers:
  • Custom AI models - You can run any AI/NN model(s) on the device, as long as all layers are supported. You can also choose from 200+ pretrained NN models from Open Model Zoo and DepthAI Model Zoo.
  • Object detection - Most popular object detectors have been converted and run on our devices. DepthAI supports onboard decoding of Yolo and MobileNet based NN models.
  • Object tracking - ObjectTracker node comes with 4 tracker types, and it also supports tracking of objects in 3D space.
  • On-device scripting - The Script node lets users run custom Python 3.9 scripts on the device, typically used to manage the flow of the pipeline (business logic).
  • Video/Image encoding - VideoEncoder node allows encoding into MJPEG, H265, or H264 formats.
  • Image Manipulation - ImageManip node allows users to resize, warp, crop, flip, and thumbnail image frames and do type conversions (YUV420, NV12, RGB, etc.)
  • Skeleton/Hand Tracking - Detect and track key points of a hand or human pose. Geaxgx's demos: Hand tracker, Blazepose, Movenet.
  • 3D Semantic segmentation - Perceive the world with semantically-labeled pixels. DeeplabV3 demo here.
  • 3D Object Pose Estimation - MediaPipe's Objectron has been converted to run on OAK cameras. Video here.
  • 3D Edge Detection - EdgeDetector node uses Sobel filter to detect edges. With depth information, you can get physical position of these edges.
  • Feature Tracking - FeatureTracker node detects and tracks key points (features).
  • 3D Feature Tracking - With depth information, you can track these features in physical space.
  • OCR - Optical character recognition, demo here.
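Features such as 3D edge detection and 3D feature tracking above combine a 2D detection with depth by re-projecting the pixel through the pinhole camera model. A minimal sketch (the intrinsics here are hypothetical; real values come from the device's calibration data):

```python
def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Re-project a pixel (u, v) with known depth into camera-space
    coordinates using the pinhole model: X = (u - cx) * Z / fx, etc."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical intrinsics for a 1280x800 depth frame.
point = pixel_to_3d(960, 400, 2.0, fx=800, fy=800, cx=640, cy=400)
print(point)  # physical position, in meters, of the detected edge/feature
```

This is how a 2D edge or tracked feature, paired with the depth map, yields a metric 3D position relative to the camera.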