Luxonis devices are powered by dedicated Robotics Vision Core (RVC) hardware, enabling real-time, low-latency AI inference directly at the edge. To support this, we provide a complete toolchain for working with AI models: sourcing or training them, converting them into device-compatible formats, deploying them for inference, and evaluating their performance. This section explains how the individual components fit together across the Luxonis AI ecosystem. For a high-level understanding of this flow, refer to the diagram below before diving into the individual topics.
Model Source
AI models can be obtained from various sources. Check out our Model Zoo for a diverse range of pre-trained models, or use our Training tools to build models tailored to your applications.
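As a quick illustration, a zoo model can be fetched programmatically. This is a minimal sketch assuming the DepthAI v3 Python API's model-zoo helpers; the model slug is a placeholder, so substitute a name from the actual zoo listing:

```python
import depthai as dai

# Describe the model we want from the Model Zoo.
# The slug below is a placeholder -- replace it with a real zoo entry.
model_description = dai.NNModelDescription("luxonis/yolov6-nano:r2-coco-512x288")

# Download the packaged model (an NN Archive) and get its local path.
archive_path = dai.getModelFromZoo(model_description)
print(f"Model downloaded to: {archive_path}")
```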
Conversion
Before they can run on Luxonis devices, AI models must be converted into a format compatible with the target RVC platform. Check out the Conversion section for more information.
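As one concrete route for the RVC2 platform, the blobconverter Python package compiles an ONNX model into a device-ready blob. This is a sketch under the assumption of a local model.onnx file and default compiler settings, not the only conversion path:

```python
import blobconverter

# Convert a local ONNX model into an RVC2-compatible .blob.
# "model.onnx" is a placeholder path; the shave count depends on
# how many SHAVE cores your pipeline can dedicate to the model.
blob_path = blobconverter.from_onnx(
    model="model.onnx",
    data_type="FP16",  # RVC2 runs FP16 inference
    shaves=6,
)
print(f"Compiled blob saved to: {blob_path}")
```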
Inference
Model inference on Luxonis devices is orchestrated by DepthAI pipelines. Check out the Inference section for more information on setting up inference pipelines and post-processing model outputs to derive meaningful insights and actionable data.
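To give a feel for the pipeline flow, here is a minimal detection sketch assuming the DepthAI v3 Python API: camera frames feed a detection network on the device, and results are read from an output queue on the host. The model slug is again a placeholder:

```python
import depthai as dai

# Minimal detection pipeline: camera -> detection network -> host queue.
with dai.Pipeline() as pipeline:
    camera = pipeline.create(dai.node.Camera).build()
    network = pipeline.create(dai.node.DetectionNetwork).build(
        camera, dai.NNModelDescription("luxonis/yolov6-nano:r2-coco-512x288")
    )
    detections_queue = network.out.createOutputQueue()

    pipeline.start()
    while pipeline.isRunning():
        detections = detections_queue.get()  # blocks until a result arrives
        print(f"Detections in frame: {len(detections.detections)}")
```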
Benchmarking
Evaluate AI model performance on Luxonis devices using our Benchmarking tools. Measure key metrics including inference speed, power consumption, and resource utilization to make informed decisions about model selection and optimization for your applications.
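For a rough first impression before reaching for the dedicated tooling, throughput can be approximated by counting network outputs over a fixed window. This manual sketch reuses the pipeline from the inference example above and is only an estimate; the Benchmarking tools report richer metrics such as power consumption and resource utilization:

```python
import time
import depthai as dai

# Rough throughput check: count neural-network outputs over ~10 seconds.
with dai.Pipeline() as pipeline:
    camera = pipeline.create(dai.node.Camera).build()
    network = pipeline.create(dai.node.DetectionNetwork).build(
        camera, dai.NNModelDescription("luxonis/yolov6-nano:r2-coco-512x288")
    )
    queue = network.out.createOutputQueue()
    pipeline.start()

    frames, start = 0, time.monotonic()
    while time.monotonic() - start < 10.0:
        queue.get()
        frames += 1
    elapsed = time.monotonic() - start
    print(f"Approximate inference rate: {frames / elapsed:.1f} FPS")
```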
NN Archive
NN Archive is our in-house format for packaging AI models in a standardized way. It bundles the model executable and its configuration files together, simplifying both conversion and model setup at runtime.
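Since an NN Archive is a compressed tar bundle, its contents can be inspected with standard tooling. This sketch assumes a bundle containing a configuration JSON alongside the compiled model; the filename is a placeholder:

```python
import tarfile

# List the contents of an NN Archive (a compressed tar bundle).
# "yolov6-nano.rvc2.tar.xz" is a placeholder filename.
with tarfile.open("yolov6-nano.rvc2.tar.xz", mode="r:xz") as archive:
    for member in archive.getmembers():
        print(member.name)  # e.g. the config JSON and the model executable
```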
Integrations
On top of the options above, we also provide integrations with external sources, making it easier to use third-party AI models inside our platform. Check out the Integrations section for more information.