ON THIS PAGE

  • AI Capabilities
  • AI building blocks
  • Typical use cases
  • Code-less quick start with App Store
  • Guides and examples

AI Capabilities

Luxonis devices run neural networks fully on-device for classification, detection, segmentation, and custom vision tasks, keeping latency low and data private while freeing the host for control logic. Across RVC2 and RVC4, you can run networks with NeuralNetwork and DetectionNetwork, deploy models in NNArchive or blob formats, and source them from the Model Zoo, custom training, or integrations.

OAK cameras with RVC2 deliver efficient on-device inference with balanced performance for common detection, segmentation, and classification pipelines. OAK4 devices with RVC4 add more TOPS, faster preprocessing, and native superblob/NNArchive support, enabling heavier backbones (e.g., YOLO-E, DepthAnythingV2, ...), multi-model pipelines on a single device, and NeuralDepth.
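The platform/format split above can be summarized host-side. The helper below is purely illustrative (its name and mapping are not part of the DepthAI API), assuming RVC2 consumes compiled blob files and RVC4 consumes NNArchive/superblob packages as described:

```python
# Illustrative helper (NOT part of DepthAI): map a Luxonis platform name to
# the model artifact format described above.
MODEL_FORMATS = {
    "RVC2": "blob",                   # compiled .blob for RVC2 devices
    "RVC4": "NNArchive (superblob)",  # native NNArchive/superblob on RVC4
}

def model_format_for(platform: str) -> str:
    """Return the expected model artifact format for a given platform."""
    try:
        return MODEL_FORMATS[platform]
    except KeyError:
        raise ValueError(f"Unknown platform: {platform!r}") from None
```

In practice the DepthAI tooling resolves this for you when you deploy a model from the Model Zoo or your own training pipeline; the table above only makes the two paths explicit.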

AI building blocks

Typical use cases

  1. Object detection & tracking: run YOLO via DetectionNetwork on RGB and feed the detections into ObjectTracker for persistent track IDs (optionally spatial with depth).
  2. Pose/landmarks & overlays: use ParsingNeuralNetwork, which automatically parses model outputs into standard message types.
  3. Segmentation & fusion: run pixelwise models, then fuse with depth or spatial perception to measure volume, occupancy, or free space.

Code-less quick start with App Store

Data Collection App

On-device open-vocabulary detection (YOLOE via DepthAI) auto-collects snapshots and lets you pick labels, set confidence thresholds, and enable snap conditions from the UI.

Guides and examples

Need assistance?

Head over to the Discussion Forum for technical support or any other questions you might have.