Manual Conversion with SNPE (RVC4)
RVC4 conversion is based on the Qualcomm Neural Processing SDK and its SNPE tools. We provide a Docker image with all the necessary tools installed. Install the ModelConverter and run:

modelconverter shell rvc4
This is equivalent to starting a new Docker container from the luxonis/modelconverter-rvc4:latest image and running it as an interactive terminal session (-it) with the --rm flag, so that the container is automatically removed once the session is exited:

docker run --rm -it \
  -v $(pwd)/shared_with_container:/app/shared_with_container/ \
  luxonis/modelconverter-rvc4:latest
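The -v bind mount is what makes files visible to both the host and the container: anything placed in ./shared_with_container on the host appears under /app/shared_with_container inside the session. As a minimal sketch (the directory layout and model name are only an example), you could stage a model before starting the shell:

mkdir -p shared_with_container/models
cp path/to/model.onnx shared_with_container/models/
modelconverter shell rvc4
# inside the container, the model is now available at:
# /app/shared_with_container/models/model.onnx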
Conversion to DLC
First, convert the model to the deep learning container (DLC) format. The following source model formats are supported:
- ONNX
- TensorFlow Lite

For ONNX models:

snpe-onnx-to-dlc --input_network <path to the .onnx model>

For TensorFlow Lite models:

snpe-tflite-to-dlc --input_network <path to the .tflite model>
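For illustration, assuming a model staged in the shared directory as above and using the --output_path option of snpe-onnx-to-dlc (verify the option name with --help for your SDK version), a concrete invocation might look like:

snpe-onnx-to-dlc \
  --input_network /app/shared_with_container/models/model.onnx \
  --output_path /app/shared_with_container/outputs/model.dlc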
Consult snpe-onnx-to-dlc --help or snpe-tflite-to-dlc --help for the full list of conversion options, or visit the official documentation.

Quantization (optional)
Second, the converted model is quantized to UINT8 precision. This step is optional and can be skipped if no quantization is desired.

snpe-dlc-quant --input_dlc <path to the .dlc model>
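SNPE post-training quantization derives its ranges from representative input data. As a sketch only — the calibration file, output name, and the --input_list/--output_dlc options shown here are assumptions and should be checked against snpe-dlc-quant --help for your SDK version — a quantization run might look like:

snpe-dlc-quant \
  --input_dlc /app/shared_with_container/outputs/model.dlc \
  --input_list /app/shared_with_container/calibration/input_list.txt \
  --output_dlc /app/shared_with_container/outputs/model_quant.dlc

Here input_list.txt is a plain-text file listing paths to representative raw input files used for calibration.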
Consult snpe-dlc-quant --help for the full list of quantization options, or visit the official documentation.

Graph preparation
Third, the model is prepared to run on the DSP/HTP accelerators.

snpe-dlc-graph-prepare --input_dlc <path to the (un-)quantized .dlc model> --htp_socs sm8550
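For example, preparing the quantized model from the previous step for the sm8550 target shown above could look like the following; the output name and the --output_dlc option are assumptions to be checked against snpe-dlc-graph-prepare --help:

snpe-dlc-graph-prepare \
  --input_dlc /app/shared_with_container/outputs/model_quant.dlc \
  --output_dlc /app/shared_with_container/outputs/model_final.dlc \
  --htp_socs sm8550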
Consult snpe-dlc-graph-prepare --help for the full list of graph preparation options, or visit the official documentation.