Understanding the core components and terminologies is essential for effective utilization of our conversion tools. This page introduces several key concepts that you will encounter as you navigate through the documentation.
Conversion
Conversion is the process of transforming a trained model from its original format into a format suitable for a target device. The format dictates how the model is stored and deployed.
RVC Platform
Historically, Luxonis devices have been released in multiple generations. We refer to these as Robotics Vision Core (RVC) Platforms, each coming with its own set of capabilities and requirements. You can find more information in the Hardware section.
RVC Compiled Formats
Each RVC platform requires models compiled into a specific format:
RVC2 - .superblob (and .blob as legacy)
RVC3 - .blob
RVC4 - .dlc
Quantization
Quantization is a method used to make AI models smaller, faster, and more efficient. This is achieved by reducing the precision of the model's weights and activations.
Why Quantization Matters
Modern AI models often use large amounts of numerical data, which by default is stored in 32-bit floating-point format (a precise but memory-heavy way to store numbers). While this precision is helpful during training, it's often more than necessary when running the model in real-world applications. Reducing it makes the model smaller, faster, and more power-efficient. The quantization process typically involves the following steps:
1. Identify the range of the input data (the true-min and true-max values).
2. Determine the encoding range (the encoding-min and encoding-max values), ensuring it meets specific requirements (e.g., exact representation of zero).
3. Convert each 32-bit floating-point value to either:
   a 16-bit floating-point value, or
   an 8-bit fixed-point value between 0 and 255.
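The steps above can be sketched as a simple affine (scale / zero-point) quantization to 8 bits. This is a minimal NumPy illustration of the general technique; the function names are ours, not part of any Luxonis tool:

```python
import numpy as np

def quantize_uint8(values: np.ndarray) -> tuple[np.ndarray, float, int]:
    """Quantize float32 values to uint8 with an affine (scale, zero-point) encoding."""
    # Step 1: identify the true range of the input data.
    true_min, true_max = float(values.min()), float(values.max())
    # Step 2: determine the encoding range. Extending it to include 0.0
    # guarantees that zero can be represented exactly.
    enc_min, enc_max = min(true_min, 0.0), max(true_max, 0.0)
    # Step 3: map each float onto the 256 levels of an 8-bit integer.
    scale = (enc_max - enc_min) / 255.0
    zero_point = int(round(-enc_min / scale))
    q = np.clip(np.round(values / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate float values from the 8-bit encoding."""
    return (q.astype(np.float32) - zero_point) * scale
```

Dequantizing the result shows the cost of the encoding: every value is recovered to within half a quantization step (`scale / 2`), which is the precision loss traded for the 4x size reduction.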
Calibration Data
Calibration data is used to determine the input range and the encoding range of the model's weights and activations during quantization. Well-chosen calibration data allows the precision to be reduced without significant loss of information. To achieve this, it must:
be representative of the data the model encountered during training, and
cover a full range of values the model is expected to handle.
We recommend building the calibration dataset from validation images. A larger, more diverse calibration set generally yields better performance of the model once quantized.
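To illustrate the role calibration data plays, the sketch below scans a set of preprocessed validation images and records the observed value range, which a quantizer would then use as the encoding range. This is a hypothetical helper for illustration only, not a Luxonis API:

```python
import numpy as np

def calibration_range(batches) -> tuple[float, float]:
    """Track the min/max values observed across calibration batches.

    `batches` is any iterable of NumPy arrays, e.g. preprocessed
    validation images. A quantizer would derive its encoding range
    (and hence its scale and zero-point) from these observed bounds.
    """
    lo, hi = float("inf"), float("-inf")
    for batch in batches:
        lo = min(lo, float(batch.min()))
        hi = max(hi, float(batch.max()))
    return lo, hi
```

This makes the two requirements above concrete: if the calibration set misses part of the range the model sees in production, those out-of-range values get clipped after quantization, which is why the set must be representative and cover the full expected range.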