AI models are algorithms that learn to perform specific tasks from data. Most state-of-the-art (SOTA) models today are neural networks, spanning architectures such as feedforward, convolutional, and recurrent networks, used for tasks like image recognition and natural language processing. Luxonis devices are equipped with powerful Robotics Vision Core (RVC) chips that enable them to run AI model inference. Here we document how to obtain, convert, and run AI model inference on a Luxonis device of your choice. For a clearer understanding of how our components connect, refer to the diagram below before diving into the full documentation.
Model Source
AI models can be obtained from various sources. Check out our Model Zoo to explore a diverse range of pre-trained models. Alternatively, use our Training tools to build models tailored to your applications.
Conversion
Before running on Luxonis devices, AI models must be converted to a format compatible with the target RVC platform. Check out the Conversion section for more information.
Inference
The process of running model inference on Luxonis devices is orchestrated by DepthAI pipelines. Check out the Inference section for more information on setting up the inference pipelines and post-processing the model outputs to derive meaningful insights and actionable data.
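To give a concrete sense of the post-processing step, the sketch below decodes raw classification logits into a label and a confidence score with a softmax followed by an argmax. This is a hypothetical, standalone illustration of what "post-processing the model outputs" can mean; it is not the DepthAI API itself, and the labels and logit values are made up for the example.

```python
import math

def decode_classification(logits, labels):
    """Turn raw model logits into (label, confidence) via softmax + argmax."""
    # Subtract the max logit before exponentiating for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Pick the class with the highest probability.
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Hypothetical 3-class output from a neural network.
logits = [0.2, 2.9, -1.1]
labels = ["cat", "dog", "bird"]
label, confidence = decode_classification(logits, labels)
print(label, round(confidence, 3))
```

In a real pipeline, the raw output tensor would arrive from the device in an output queue, and the decoding logic would depend on the model's head (classification, detection, segmentation, and so on).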
NN Archive
NN Archive is our in-house format for packaging AI models in a standardized way. It stores the model executable and configuration files together, simplifying model setup during both conversion and runtime.
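To illustrate the idea, a minimal archive configuration might look like the fragment below. The schema, field names, and model details here are illustrative assumptions chosen for the example, not the authoritative NN Archive specification; consult the NN Archive documentation for the actual format.

```json
{
  "config_version": "1.0",
  "model": {
    "metadata": {
      "name": "example-classifier",
      "path": "example-classifier.onnx"
    },
    "inputs": [
      { "name": "images", "shape": [1, 3, 224, 224], "dtype": "float32" }
    ],
    "outputs": [
      { "name": "scores", "dtype": "float32" }
    ]
  }
}
```

Bundling this configuration alongside the model executable is what lets the runtime configure inputs, outputs, and post-processing without extra manual setup.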
Integrations
On top of the options above, we also provide integrations with external sources to simplify the usage of external AI models inside our platform. Check out the Integrations section for more information.