Manual Conversion with SNPE (RVC4)

RVC4 conversion is based on the Qualcomm Neural Processing SDK (SNPE) tools. We provide a Docker image with all the necessary tools pre-installed. Install the ModelConverter and run:
Command Line
modelconverter shell rvc4

Overview

This is equivalent to starting a new Docker container from the luxonis/modelconverter-rvc4:latest image as an interactive terminal session (-it), with the --rm flag so the container is automatically removed once the session exits, and with the local shared_with_container directory mounted into the container:
Command Line
docker run --rm -it \
    -v $(pwd)/shared_with_container:/app/shared_with_container/ \
    luxonis/modelconverter-rvc4:latest
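Because the local shared_with_container directory is mounted at /app/shared_with_container/ inside the container, any model placed there on the host is accessible from the shell. As a minimal sketch (the model name and the models/ subdirectory are arbitrary placeholders):
Command Line
mkdir -p shared_with_container/models
cp model.onnx shared_with_container/models/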
In the following section, we explain the conversion process step-by-step.

Conversion to DLC

First, convert the model to the Deep Learning Container (DLC) format. The following source model formats are supported:
  • ONNX
  • TensorFlow Lite
If converting from ONNX, run:
Command Line
snpe-onnx-to-dlc --input_network <path to the .onnx model>
If converting from TFLite, run:
Command Line
snpe-tflite-to-dlc --input_network <path to the .tflite model>
Note that no prior model simplification is required as SNPE handles that automatically.
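As a concrete example, assuming a hypothetical model stored at shared_with_container/models/model.onnx, you can also pass --output_path (supported by the SNPE versions we have used) to control where the resulting DLC file is written; both paths are placeholders:
Command Line
snpe-onnx-to-dlc \
    --input_network shared_with_container/models/model.onnx \
    --output_path shared_with_container/models/model.dlc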

Quantization (optional)

Second, quantize the converted model to UINT8 precision. This step is optional and can be skipped if quantization is not desired.
Command Line
snpe-dlc-quant --input_dlc <path to the .dlc model>
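In practice, quantization needs representative calibration data: in the SNPE releases we have worked with, snpe-dlc-quant accepts an --input_list pointing to a text file that lists raw input tensors, and an --output_dlc for the quantized result. A hedged sketch with placeholder paths:
Command Line
snpe-dlc-quant \
    --input_dlc shared_with_container/models/model.dlc \
    --input_list shared_with_container/calibration/input_list.txt \
    --output_dlc shared_with_container/models/model_quant.dlc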

Graph preparation

Third, prepare the model to run on the DSP/HTP accelerators. The --htp_socs flag selects the target SoC (sm8550 in this case).
Command Line
snpe-dlc-graph-prepare --input_dlc <path to the (un-)quantized .dlc model> --htp_socs sm8550
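For reference, a full invocation might look as follows, assuming the quantized model from the previous step and the --output_dlc flag available in the SNPE versions we have used (paths are placeholders):
Command Line
snpe-dlc-graph-prepare \
    --input_dlc shared_with_container/models/model_quant.dlc \
    --output_dlc shared_with_container/models/model_prepared.dlc \
    --htp_socs sm8550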