# Integration with Roboflow

Roboflow is a platform for building and deploying custom computer vision models. Combined with Luxonis OAK devices, it enables
straightforward on-device deployment of those models.

> Note that we currently only support object detection tasks when deploying Roboflow models on the OAK devices.

## Installation

To deploy your models, you will need to install `roboflowoak`, `depthai`, and `opencv-python`:

```bash
pip install roboflowoak
pip install depthai
pip install opencv-python
```

## Deploying a Model from Roboflow

Deploy your Roboflow object detection model on OAK devices as shown in the example below. Note that you will need to adjust
the following parameters to suit your specific model and requirements:

 * `model`: Replace this with the ID of your model in Roboflow.
 * `version`: Insert the specific version number of your model.
 * `api_key`: Use the private API key provided by Roboflow for your account.

```python
from roboflowoak import RoboflowOak
import cv2
import time
import numpy as np

if __name__ == '__main__':
    # instantiating an object (rf) with the RoboflowOak module
    rf = RoboflowOak(model="YOUR-MODEL-ID", confidence=0.05, overlap=0.5,
                     version="YOUR-MODEL-VERSION-#", api_key="YOUR-PRIVATE-API-KEY",
                     rgb=True, depth=True, device=None, blocking=True)
    # Running our model and displaying the video output with detections
    while True:
        t0 = time.time()
        # The rf.detect() function runs the model inference
        result, frame, raw_frame, depth = rf.detect()
        predictions = result["predictions"]
        # result structure:
        # {
        #     "predictions": [
        #         {
        #             "x": ...,          # box center x
        #             "y": ...,          # box center y
        #             "width": ...,
        #             "height": ...,
        #             "depth": ...,      # only on depth-enabled OAK devices
        #             "confidence": ...,
        #             "class": ...,
        #         },
        #     ]
        # }
        # frame - frame after preprocessing, with predictions drawn
        # raw_frame - original frame from your OAK
        # depth - depth map for raw_frame, center-rectified to the center camera
        
        # timing: for benchmarking purposes
        t = time.time()-t0
        print("FPS ", 1/t)
        print("PREDICTIONS ", [p.json() for p in predictions])

        # setting parameters for depth visualization
        # comment out the following 2 lines if you're using an OAK without depth
        max_depth = np.amax(depth)
        cv2.imshow("depth", depth/max_depth)
        # displaying the video feed as successive frames
        cv2.imshow("frame", frame)
    
        # to stop inference, press 'q' with the window focused (or CTRL+C in the terminal)
        if cv2.waitKey(1) == ord('q'):
            break
```
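As noted in the comments above, each prediction reports `x` and `y` as the *center* of the bounding box, while OpenCV drawing functions such as `cv2.rectangle` expect top-left and bottom-right corners. The sketch below shows one way to convert between the two; the helper name is hypothetical, not part of the `roboflowoak` API:

```python
def center_box_to_corners(x, y, width, height):
    """Convert a center-based box (as in Roboflow predictions)
    to (x1, y1, x2, y2) corner coordinates for cv2.rectangle."""
    x1 = int(x - width / 2)
    y1 = int(y - height / 2)
    x2 = int(x + width / 2)
    y2 = int(y + height / 2)
    return x1, y1, x2, y2
```

Inside the loop above you could then draw your own overlays on `raw_frame`, e.g. `cv2.rectangle(raw_frame, (x1, y1), (x2, y2), (0, 255, 0), 2)` for each prediction.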

For additional information, refer to the official Roboflow tutorial available
[here](https://docs.roboflow.com/deploy/luxonis-oak).
