# AI

## Overview

Luxonis devices are powered by dedicated Robotics Vision Core (RVC) hardware, enabling real-time, low-latency AI inference directly at the edge.

To support this, we provide a complete toolchain for working with AI models: sourcing or training them, converting them into
device-compatible formats, deploying them for inference, and evaluating their performance.

## AI Components & Flow

This section explains how the individual components fit together to enable working with AI models across the Luxonis AI ecosystem.

For a high-level understanding of this flow, refer to the diagram below before diving into the individual topics.

### Model Source

AI models can be obtained from various sources. Browse our [Model
Zoo](https://docs.luxonis.com/software-v3/ai-inference/model-source/zoo.md) for a diverse range of ready-to-deploy,
pre-trained models.

Alternatively, use our [Training](https://docs.luxonis.com/software-v3/ai-inference/model-source/training.md) tools to train
models tailored to your application.
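
As a quick orientation, the sketch below shows one way a Zoo model might be fetched through DepthAI v3. The `getModelFromZoo` helper, the `NNModelDescription` fields, and the model slug are assumptions here; see the Model Zoo section for the exact API and available models.

```python
import depthai as dai

# Minimal sketch: download a pre-trained model from the Model Zoo.
# The helper name, descriptor fields, and model slug are assumptions --
# consult the Model Zoo docs for the exact API.
description = dai.NNModelDescription(
    model="yolov6-nano",  # hypothetical Zoo model slug
    platform="RVC2",      # target RVC platform
)
archive_path = dai.getModelFromZoo(description)  # downloads and caches an NN Archive
print(f"Model archive cached at: {archive_path}")
```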

### Conversion

Before they can run on Luxonis devices, AI models must be converted into a format compatible with the target RVC platform. Check
out the [Conversion](https://docs.luxonis.com/software-v3/ai-inference/conversion.md) section for more information.
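
As an illustration, a conversion for RVC2-class devices can be scripted with the `blobconverter` package; the paths and parameter values below are placeholders, and other RVC platforms use the conversion tooling described in the linked section.

```python
import blobconverter

# Illustrative sketch: compile an ONNX model for an RVC2 device with the
# blobconverter package. Paths and parameter values are placeholders;
# other RVC platforms use different conversion tooling (see the docs above).
blob_path = blobconverter.from_onnx(
    model="my_model.onnx",  # hypothetical ONNX export of your model
    data_type="FP16",       # precision used by the on-device accelerator
    shaves=6,               # number of SHAVE compute cores to compile for
)
print(f"Compiled blob written to: {blob_path}")
```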

### Inference

Model inference on Luxonis devices is orchestrated by DepthAI pipelines. Check out the
[Inference](https://docs.luxonis.com/software-v3/ai-inference/inference.md) section for details on setting up
inference pipelines and post-processing model outputs into meaningful results.
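
For a feel of what such a pipeline looks like, here is a minimal sketch in the DepthAI v3 style; the node and method names follow published v3 examples but should be treated as assumptions, and the model slug is a placeholder.

```python
import depthai as dai

# Minimal sketch of a DepthAI v3 inference pipeline: camera frames are fed
# into a detection model and decoded results are read from an output queue.
with dai.Pipeline() as pipeline:
    camera = pipeline.create(dai.node.Camera).build()
    nn = pipeline.create(dai.node.DetectionNetwork).build(
        camera, dai.NNModelDescription("yolov6-nano")  # hypothetical Zoo slug
    )
    detections_q = nn.out.createOutputQueue()

    pipeline.start()
    while pipeline.isRunning():
        msg = detections_q.get()  # blocks until the next inference result
        print(f"{len(msg.detections)} objects detected")
```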

### Benchmarking

Evaluate AI model performance on Luxonis devices using our
[Benchmarking](https://docs.luxonis.com/software-v3/ai-inference/benchmarking.md) tools. Measure key metrics including inference
speed, power consumption, and resource utilization to make informed decisions about model selection and optimization for your
applications.
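
As a rough complement to those tools, inference speed can also be estimated by hand by timing results coming off an output queue. The sketch below reuses the hypothetical pipeline from the Inference section; it is an assumption-laden approximation, not the Benchmarking tooling itself.

```python
import time
import depthai as dai

# Rough, hand-rolled throughput estimate: time a fixed number of inference
# results coming off the queue. Node names and the model slug are assumptions.
with dai.Pipeline() as pipeline:
    camera = pipeline.create(dai.node.Camera).build()
    nn = pipeline.create(dai.node.DetectionNetwork).build(
        camera, dai.NNModelDescription("yolov6-nano")  # hypothetical Zoo slug
    )
    q = nn.out.createOutputQueue()

    pipeline.start()
    samples, t0 = 300, time.monotonic()
    for _ in range(samples):
        q.get()  # block until the next result arrives
    elapsed = time.monotonic() - t0
    print(f"~{samples / elapsed:.1f} inferences per second")
```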

### NN Archive

[NN Archive](https://docs.luxonis.com/software-v3/ai-inference/nn-archive.md) is our in-house format for packaging AI models in a
standardized way. It stores the model executable and configuration files together, simplifying model setup during both conversion
and runtime.
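
To illustrate the idea, the sketch below loads an NN Archive and hands it to a network node; the `NNArchive` class and the `build()` overload are assumptions based on the DepthAI v3 style, and the archive path is a placeholder.

```python
import depthai as dai

# Minimal sketch: run a model packaged as an NN Archive. The archive bundles
# the executable with its configuration, so no extra setup code is needed.
archive = dai.NNArchive("my_model.tar.xz")  # placeholder archive path

with dai.Pipeline() as pipeline:
    camera = pipeline.create(dai.node.Camera).build()
    nn = pipeline.create(dai.node.NeuralNetwork).build(camera, archive)
    out_q = nn.out.createOutputQueue()

    pipeline.start()
    result = out_q.get()  # raw NNData; post-process per the archive's config
```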

### Integrations

In addition to the options above, we provide integrations with external model sources, making it easy to use third-party AI models
within our platform. Check out the [Integrations](https://docs.luxonis.com/software-v3/ai-inference/integrations.md) section for
more information.
