Overview

Meet the Luxonis stack in one place. OAK cameras let you capture synchronized RGB, depth, and motion data, run AI models directly on the device, and turn perception into real-world measurements and actions. Use this page as a starting point for understanding what OAK devices can do and where to go next: on-device AI inference, depth and spatial perception, tracking, and fleet-scale remote management.

Top-level features

AI Capabilities

Run detection, segmentation, and custom models fully on-device, keeping latency low and image data private.
Explore AI capabilities
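
As a rough sketch of what on-device inference looks like, the snippet below builds a minimal detection pipeline with the DepthAI v2 Python API; "model.blob" is a placeholder for any compiled detection model, and details may differ across DepthAI releases and devices.

    import depthai as dai

    pipeline = dai.Pipeline()

    # Color camera feeding a 300x300 preview into the neural network
    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(300, 300)
    cam.setInterleaved(False)

    # Detection network runs entirely on the OAK device
    nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
    nn.setBlobPath("model.blob")  # placeholder: any compiled detection model
    nn.setConfidenceThreshold(0.5)
    cam.preview.link(nn.input)

    # Only the detection results are streamed back to the host
    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("nn")
    nn.out.link(xout.input)

    with dai.Device(pipeline) as device:
        q = device.getOutputQueue("nn", maxSize=4, blocking=False)
        while True:
            for det in q.get().detections:
                print(det.label, det.confidence, det.xmin, det.ymin, det.xmax, det.ymax)

Because inference happens on the camera, the host only receives small detection messages rather than raw frames.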

Depth

Get metric 3D awareness with stereo, neural, or ToF depth on compatible OAK devices.
Explore depth
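
The sketch below shows one way to get a metric depth stream with the DepthAI v2 Python API on a stereo OAK device; the CAM_B/CAM_C socket assignments assume a standard left/right mono layout and may differ on your hardware.

    import depthai as dai

    pipeline = dai.Pipeline()

    # Left and right mono cameras feed the on-device stereo matcher
    monoLeft = pipeline.create(dai.node.MonoCamera)
    monoRight = pipeline.create(dai.node.MonoCamera)
    monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
    monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)

    stereo = pipeline.create(dai.node.StereoDepth)
    stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
    stereo.setLeftRightCheck(True)  # suppresses occlusion artifacts
    monoLeft.out.link(stereo.left)
    monoRight.out.link(stereo.right)

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("depth")
    stereo.depth.link(xout.input)

    with dai.Device(pipeline) as device:
        q = device.getOutputQueue("depth", maxSize=4, blocking=False)
        frame = q.get().getFrame()  # uint16 depth map, one value per pixel in millimetres
        print(frame.shape, frame[frame.shape[0] // 2, frame.shape[1] // 2], "mm at centre")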

Feature Tracking

Track keypoints in real time for odometry, stabilization, and AR anchoring.
Explore feature tracking
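
As a sketch, again assuming the DepthAI v2 Python API, the FeatureTracker node below streams hardware-tracked keypoints from a single mono camera; each feature keeps a persistent id for as long as it remains tracked, which is what odometry and stabilization pipelines build on.

    import depthai as dai

    pipeline = dai.Pipeline()

    mono = pipeline.create(dai.node.MonoCamera)
    mono.setBoardSocket(dai.CameraBoardSocket.CAM_B)

    # Corner detection and optical-flow tracking run on the device
    featureTracker = pipeline.create(dai.node.FeatureTracker)
    mono.out.link(featureTracker.inputImage)

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("features")
    featureTracker.outputFeatures.link(xout.input)

    with dai.Device(pipeline) as device:
        q = device.getOutputQueue("features", maxSize=4, blocking=False)
        while True:
            for f in q.get().trackedFeatures:
                # A feature keeps the same id for as long as it stays tracked
                print(f.id, f.position.x, f.position.y)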

Object Tracking

Assign stable IDs to detections and maintain object identity through motion and occlusion.
Explore object tracking
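
The sketch below feeds a detection network into the ObjectTracker node using the DepthAI v2 Python API; "model.blob" is a placeholder detection model, and the tracker type and id-assignment policy shown are just one reasonable configuration.

    import depthai as dai

    pipeline = dai.Pipeline()

    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(300, 300)
    cam.setInterleaved(False)

    nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
    nn.setBlobPath("model.blob")  # placeholder detection model
    nn.setConfidenceThreshold(0.5)
    cam.preview.link(nn.input)

    # ObjectTracker consumes detections plus frames and re-identifies objects over time
    tracker = pipeline.create(dai.node.ObjectTracker)
    tracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)
    tracker.setTrackerIdAssignmentPolicy(dai.TrackerIdAssignmentPolicy.SMALLEST_ID)
    nn.passthrough.link(tracker.inputTrackerFrame)
    nn.passthrough.link(tracker.inputDetectionFrame)
    nn.out.link(tracker.inputDetections)

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("tracklets")
    tracker.out.link(xout.input)

    with dai.Device(pipeline) as device:
        q = device.getOutputQueue("tracklets", maxSize=4, blocking=False)
        while True:
            for t in q.get().tracklets:
                print(t.id, t.status.name)  # id persists through motion and brief occlusion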

Spatial Perception

Combine depth, motion, and AI to understand where things are in 3D space.
Explore spatial perception
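
Putting the detection and depth pieces together, the sketch below uses a spatial detection network (DepthAI v2 Python API, placeholder "model.blob") to fuse each detection with aligned depth and report a camera-centric X, Y, Z position in millimetres.

    import depthai as dai

    pipeline = dai.Pipeline()

    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(300, 300)
    cam.setInterleaved(False)

    monoLeft = pipeline.create(dai.node.MonoCamera)
    monoRight = pipeline.create(dai.node.MonoCamera)
    monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
    monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)

    stereo = pipeline.create(dai.node.StereoDepth)
    stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)  # align depth to the color camera
    monoLeft.out.link(stereo.left)
    monoRight.out.link(stereo.right)

    # Spatial detection network fuses each detection with depth to report 3D position
    nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
    nn.setBlobPath("model.blob")  # placeholder detection model
    nn.setConfidenceThreshold(0.5)
    cam.preview.link(nn.input)
    stereo.depth.link(nn.inputDepth)

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("detections")
    nn.out.link(xout.input)

    with dai.Device(pipeline) as device:
        q = device.getOutputQueue("detections", maxSize=4, blocking=False)
        while True:
            for det in q.get().detections:
                c = det.spatialCoordinates  # camera-centric X, Y, Z in millimetres
                print(det.label, c.x, c.y, c.z)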

Remote Management

Deploy apps, stream telemetry, and update fleets using Hub and oakctl.
Explore remote management

Where to start

  • New to OAK? Start with AI Capabilities and Depth, then expand into Object Tracking and Spatial Perception for end-to-end spatial AI pipelines.
  • Building for motion or mapping? Begin with Feature Tracking, then combine it with Depth for robust navigation and AR use cases.
  • Scaling devices? Jump straight to Remote Management to learn deployment, telemetry, and OTA workflows.