# Simplified Conversion for Yolos

## About

We created a tool that simplifies exporting Yolo object detectors. It supports the conversion of Yolos
ranging from V5 through V8, as well as Gold Yolo, and can target either RVC2 or RVC3.

Upload the weights of the pre-trained model (a .pt file), set an input image shape, and choose the Robotics Vision Core
generation of the target device (learn more about the parameters in [Options of Tools](#options-of-tools)), and we'll
compile a blob and a JSON file with the information DepthAI needs to decode the results.

## Example

This example shows how to convert YoloV6n R4 and how to run the compiled model on an OAK device.

 1. First, we will start by downloading the model's weights. You can download them from
    [here](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6n.pt).

 2. Open the [tools](https://tools.luxonis.com/) in a browser of your choice. Upload the downloaded yolov6n.pt weights and
    set the Input image shape to 640 352 (we choose this shape because its aspect ratio is close to 16:9 while throughput and
    latency remain decent). Leave the rest of the options as they are.

 3. Click the Submit button. The model will be converted automatically and downloaded as a zip archive (the archive
    contains the converted blob file, a JSON file, and the intermediate representation used to generate the blob).

 4. To run the exported model, we are going to use [OAK
    Examples](https://github.com/luxonis/oak-examples/tree/master/gen2-yolo/device-decoding). Start by cloning the
    repository and installing the packages required by the gen2-yolo/device-decoding app.

```bash
git clone https://github.com/luxonis/oak-examples.git
cd oak-examples
git switch master
cd gen2-yolo/device-decoding
python3 -m pip install -r requirements.txt
```

 5. Extract the exported model files from result.zip into the app's model folder, then run the app with the following
    command.

```bash
python3 main.py --config model/yolov6n.json
```
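The JSON file in result.zip carries the decoding metadata that the app reads via `--config`. A minimal sketch of loading such a file is shown below; the embedded sample and its exact key names (`nn_config`, `NN_specific_metadata`, `mappings`) are assumptions based on typical DepthAI Yolo configs and may differ between exporter versions:

```python
import json

# Hypothetical excerpt of the JSON bundled into result.zip;
# the real file is generated by the tools and may use different keys.
sample = """
{
  "nn_config": {
    "input_size": "640x352",
    "NN_specific_metadata": {
      "classes": 80,
      "coordinates": 4,
      "iou_threshold": 0.5,
      "confidence_threshold": 0.5
    }
  },
  "mappings": {"labels": ["person", "bicycle", "car"]}
}
"""

config = json.loads(sample)
meta = config["nn_config"]["NN_specific_metadata"]
# Parse the "WIDTHxHEIGHT" string into integers for the pipeline input.
width, height = map(int, config["nn_config"]["input_size"].split("x"))

print(meta["classes"], width, height)  # values the decoder would use
```

In the device-decoding app, values like these are used to configure the on-device Yolo decoder, so the blob and its JSON must always come from the same export.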

## Options of Tools

 * Yolo Version - Required; which Yolo version conversion should be used. The tools include an automatic Yolo version
   detector that detects and sets the version when you upload a model's weights.
 * RVC2 or RVC3 - Required, Robotics Vision Core generation of the target device.
 * File - Required; weights of a pre-trained model (.pt file). The file size must be smaller than 300 MB.
 * Input image shape - Required; a single integer for a square input image shape, or width and height separated by a space.
   Each dimension must be divisible by 32 (or 64, depending on the stride).
 * Shaves - Optional; default value is 6. Number of SHAVE cores used. To read more about SHAVEs, please refer to
   [here](https://docs.luxonis.com/software/ai-inference/conversion.md).
 * Use OpenVINO 2021.4 - Optional; default value is true. This checkbox controls whether the legacy frontend flag is used
   during compilation to IR. If off, OpenVINO 2022.1 is used instead. Since slight performance degradation was observed
   with 2022.1, we recommend leaving it set to true.
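The input-shape constraint above can be checked with a small helper; `fit_to_stride` is a hypothetical name for illustration, not part of the tools:

```python
def fit_to_stride(value: int, stride: int = 32) -> int:
    """Round a dimension up to the nearest multiple of the stride."""
    return ((value + stride - 1) // stride) * stride

# The 640x352 shape from the example already satisfies the constraint
# (and 640/352 ~= 1.82 is close to the 16:9 ratio of 1.78)...
assert fit_to_stride(640) == 640 and fit_to_stride(352) == 352
# ...while an arbitrary height such as 350 would need bumping to 352.
assert fit_to_stride(350) == 352
```

If your model uses a stride of 64, pass `stride=64` instead; the tools will reject shapes that do not satisfy the applicable constraint.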
