# On Device Programming

While regular (firmware) on-device development is not possible due to the closed nature of the native tooling, we expose
several alternative ways of running custom code:

 1. Scripting - Using Python 3.9 with [Script
    node](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/script.md)
 2. Creating your own NN model to run more computationally heavy features
 3. Creating custom [OpenCL](https://en.wikipedia.org/wiki/OpenCL) kernels

## Using Script node

The [Script node](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/script.md) lets you run custom Python
scripts on the device itself, which gives you greater flexibility when constructing pipelines.

The Script node is also very useful when running multiple neural networks in series, where the output of the first network must
be processed before an image is fed to the second. An example is the [face age/gender
recognition](https://github.com/luxonis/oak-examples/tree/master/gen2-age-gender) demo: the first NN detects faces and passes the
detections to the Script node, which creates an
[ImageManipConfig](https://docs.luxonis.com/software-v3/depthai/depthai-components/messages/image_manip_config.md) to crop the
original frame, so the [age/gender
recognition](https://docs.openvinotoolkit.org/latest/omz_models_model_age_gender_recognition_retail_0013.html) NN receives only
the cropped face.

For computationally heavy features (e.g., image filters), you might want to avoid the Script node for performance reasons and
instead use one of the two options described below.
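The face-cropping logic described above can be sketched in plain Python. The function below models what a Script node script
might compute from a detection's normalized bounding box before building an `ImageManipConfig`; the function name and padding
factor are illustrative, not part of the DepthAI API:

```python
def detection_to_crop(xmin, ymin, xmax, ymax, pad=0.1):
    """Expand a normalized detection bbox by `pad` of its size on each
    side and clamp it to the frame, returning (xmin, ymin, xmax, ymax).

    Illustrative sketch only: inside a Script node, these values would
    be fed into an ImageManipConfig crop rectangle and sent to an
    ImageManip node, which then forwards the cropped face frame to the
    second NN.
    """
    w, h = xmax - xmin, ymax - ymin
    xmin = max(0.0, xmin - w * pad)
    ymin = max(0.0, ymin - h * pad)
    xmax = min(1.0, xmax + w * pad)
    ymax = min(1.0, ymax + h * pad)
    return xmin, ymin, xmax, ymax
```

Padding the detection slightly before cropping is a common trick so the second network sees the whole face rather than a tightly
clipped one; the clamping keeps the rectangle inside the normalized frame.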

## Creating custom NN models

You can create custom models with your favorite NN library, convert the model to OpenVINO's Intermediate Representation (IR), and
then compile it into a .blob. More information on this topic can be found in the [Converting model to MyriadX
blob](https://docs.luxonis.com/software-v3/ai-inference/conversion.md) documentation.

Refer to the [Computer Vision](https://docs.luxonis.com/software-v3/depthai/perception/computer-vision.md) page to find out more.

### Supported layers

When converting your model to OpenVINO's IR format (.bin and .xml), you have to check whether OpenVINO supports the layers used in
the model. Here are the supported layers and their limitations for various frameworks:

 * [Caffe](https://docs.openvino.ai/2022.1/openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers.html#caffe-supported-layers)
 * [MXNet](https://docs.openvino.ai/2022.1/openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers.html#mxnet-supported-symbols)
 * [TensorFlow](https://docs.openvino.ai/2022.1/openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers.html#tensorflow-supported-operations)
 * [TensorFlow 2
   Keras](https://docs.openvino.ai/2022.1/openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers.html#tensorflow-2-keras-supported-operations)
 * [Kaldi](https://docs.openvino.ai/2022.1/openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers.html#kaldi-supported-layers)
 * [ONNX](https://docs.openvino.ai/2022.1/openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers.html#onnx-supported-operators)
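Before attempting a conversion, it can save time to diff the op types used by your model against the relevant table above. A
minimal sketch (the supported-op set here is abbreviated and hypothetical; consult the linked tables for the authoritative
lists):

```python
# Abbreviated, hypothetical subset of ONNX operators supported by
# OpenVINO's Model Optimizer -- see the linked ONNX table for the
# full, authoritative list.
SUPPORTED_ONNX_OPS = {
    "Conv", "Relu", "MaxPool", "AveragePool", "Concat",
    "Add", "Mul", "Softmax", "Reshape", "Gemm",
}

def unsupported_ops(model_op_types):
    """Return the op types from the model that are absent from the
    supported set, so conversion blockers are spotted early."""
    return sorted(set(model_op_types) - SUPPORTED_ONNX_OPS)
```

With the `onnx` package installed, the model's op types could be gathered as
`{n.op_type for n in onnx.load("model.onnx").graph.node}` and passed to `unsupported_ops`.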

## Creating custom OpenCL kernels

Creating custom NN models has some limitations, such as the [supported layers](#supported-layers) of OpenVINO/VPU. To avoid these
limitations, you can create a custom OpenCL kernel and compile it for the VPU; the kernel will run on the VPU's SHAVE core(s).
Note that this option is not very user-friendly. We plan on creating a tutorial on how to develop such kernels and run them on
OAK cameras.

 * [Tutorial on how to implement custom layers with
   OpenCL](https://docs.openvinotoolkit.org/2021.1/openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel.html) by OpenVINO
 * [Custom kernel implementations in
   OpenCL](https://github.com/openvinotoolkit/openvino/tree/2021.4.2/inference-engine/src/vpu/custom_kernels)
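For a sense of scale, a custom VPU layer is an ordinary OpenCL C kernel plus an XML descriptor that binds its parameters. The
kernel below is a minimal, hypothetical brightness-scaling example written for this page (it is not taken from the linked
repository); see OpenVINO's tutorial above for the descriptor format and the compilation steps:

```c
// Hypothetical example: scale each element of a single-channel fp16
// buffer by `factor`. Real VPU kernels are compiled with OpenVINO's
// offline OpenCL compiler and registered with the Inference Engine
// through an XML kernel descriptor.
#pragma OPENCL EXTENSION cl_khr_fp16 : enable

__kernel void scale_brightness(__global const half *src,
                               __global half *dst,
                               float factor)
{
    int i = get_global_id(0);
    dst[i] = src[i] * (half)factor;
}
```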
