# OAK Visualizer

The Visualizer lets you view multiple camera streams and data outputs from your OAK device directly in a web browser. It provides
a more flexible alternative to OpenCV's `imshow()` function, with support for viewing multiple streams simultaneously and
interactive controls. It is currently fully supported only in the Chrome browser.

Note that Visualizer works with the [DepthAI API](https://docs.luxonis.com/software-v3/depthai.md) and is compatible with all OAK
devices.

> Please note that all features of the Visualizer work only in Chrome. All browsers should support RAW streams; encoded streams and
> features based on them are supported only in Chrome because they require the full
> [WebCodecs API](https://developer.mozilla.org/en-US/docs/Web/API/VideoDecoder),
> which currently only Chrome provides with high quality and performance.

## Getting Started

#### Basic Setup

To use the Visualizer, you need to create a RemoteConnection in your DepthAI application:

```python
import depthai as dai

# Create a RemoteConnection (Visualizer server)
visualizer = dai.RemoteConnection()

# Create pipeline
with dai.Pipeline() as pipeline:
    # Create camera node
    # Build method without arguments will use the default camera
    rgb_cam = pipeline.create(dai.node.Camera).build()

    # Request NV12 stream with resolution of 1280x720 from the camera
    rgb_stream = rgb_cam.requestOutput(size=(1280, 720), type=dai.ImgFrame.Type.NV12)

    # Add camera stream as a topic to the Visualizer
    visualizer.addTopic("rgb", rgb_stream)

    # Build and start the pipeline
    pipeline.start()

    # Register the pipeline graph to be visualized in the Visualizer
    visualizer.registerPipeline(pipeline)

    # Main loop executed while the pipeline is running
    while pipeline.isRunning():
        # Check for key presses in the Visualizer
        if visualizer.waitKey(1) == ord('q'):
            # If 'q' is pressed, stop the pipeline
            pipeline.stop()
```

After running this code, open your browser to http://localhost:8080 to view the streams.

#### Stereo Device Setup

On devices with a stereo camera pair, you can stream the left and right cameras alongside the RGB camera:

```python
import depthai as dai

# Create a RemoteConnection (Visualizer server)
visualizer = dai.RemoteConnection()

# Create pipeline
with dai.Pipeline() as pipeline:
    # Create camera nodes
    rgb_cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
    left_cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
    right_cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)

    # Request NV12 streams from the cameras
    rgb_stream = rgb_cam.requestOutput(size=(1280, 720), type=dai.ImgFrame.Type.NV12)
    left_stream = left_cam.requestOutput(size=(800, 600), type=dai.ImgFrame.Type.NV12)
    right_stream = right_cam.requestOutput(size=(800, 600), type=dai.ImgFrame.Type.NV12)

    # Add camera streams as topics to the Visualizer
    visualizer.addTopic("rgb", rgb_stream)
    visualizer.addTopic("left", left_stream)
    visualizer.addTopic("right", right_stream)

    # Build and start the pipeline
    pipeline.start()

    # Register the pipeline graph to be visualized in the Visualizer
    visualizer.registerPipeline(pipeline)

    # Main loop executed while the pipeline is running
    while pipeline.isRunning():
        # Check for key presses in the Visualizer
        if visualizer.waitKey(1) == ord('q'):
            # If 'q' is pressed, stop the pipeline
            pipeline.stop()
```

After running this code, open your browser to http://localhost:8080 to view the streams.

#### Advanced Configuration

You can customize the Visualizer's behavior with additional parameters:

```python
# Custom RemoteConnection configuration
remote = dai.RemoteConnection(
    address="0.0.0.0",      # Bind to all interfaces
    webSocketPort=8765,     # Custom WebSocket port
    serveFrontend=True,     # Enable the web UI
    httpPort=8080           # Custom HTTP port
)

# Add streams with groups for organization
remote.addTopic("rgb", rgb_stream, group="RGB")
remote.addTopic("left", left_stream, group="Left")
remote.addTopic("right", right_stream, group="Right")

# Add neural network results
# Grouping the detections with the "rgb" stream ensures they are overlaid
# only on the RGB stream, not on the left/right streams
remote.addTopic("detections", nn.out, group="RGB")
```

## Features

 * Multiple Stream Viewing - View all camera streams simultaneously in the browser
 * Custom Message Queues - Create custom message queues for sending data to the Visualizer
 * Remote Viewing - Access the Visualizer from any device on your network
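For remote viewing, other devices on your network need your machine's LAN address rather than `localhost`. A small helper like the following can print a shareable URL. This is a sketch: `visualizer_url` is a hypothetical name, and it assumes the `RemoteConnection` defaults of `address="0.0.0.0"` and `httpPort=8080`.

```python
import socket

def visualizer_url(port=8080):
    """Best-effort guess of the LAN URL other devices can use to reach the
    Visualizer (assumes RemoteConnection is bound to 0.0.0.0)."""
    try:
        # A UDP connect() selects the outbound interface without sending
        # any packets, so this works even behind a firewall
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.connect(("8.8.8.8", 80))
            host = s.getsockname()[0]
    except OSError:
        # No route available; fall back to local-only access
        host = "localhost"
    return f"http://{host}:{port}"

print(visualizer_url())
```

Open the printed URL from a phone or another computer on the same network to view the streams remotely.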

## API Reference

### RemoteConnection Class

The RemoteConnection class is the main interface for the Visualizer:

```python
remote = dai.RemoteConnection(
    address="0.0.0.0",      # IP address to bind to
    webSocketPort=8765,     # WebSocket port
    serveFrontend=True,     # Whether to serve the frontend UI
    httpPort=8080           # HTTP port for the web interface
)
```

### Adding Topics

You can add topics (streams) to the Visualizer in two ways. First, by adding a topic that streams the output of a node:

```python
remote.addTopic(
    topicName="rgb",              # Name shown in the UI
    output=rgb_stream,            # Node output to stream
    group="Cameras",              # Optional group name
    useVisualizationIfAvailable=True  # If True, call getVisualizationMessage() on the output messages to obtain ready-made visualizations when available
)
```

Or create an output queue whose contents will be streamed:

```python
rgb_queue = remote.addTopic(
    topicName="rgb",              # Name shown in the UI
    group="Cameras",              # Optional group name
    maxSize=2,                    # Maximum size of the created queue
    blocking=False,               # Whether the created queue is blocking
    useVisualizationIfAvailable=True  # If True, call getVisualizationMessage() on the queued messages to obtain ready-made visualizations when available
)
# Send custom data to the queue
rgb_queue.send(my_data)
```

### Integration with Neural Networks

```python
# Create neural network node
nn = pipeline.create(dai.node.DetectionNetwork).build(
    input=rgb_cam, # Input camera node to use as input for the neural network
    model="luxonis/yolov6-nano:r2-coco-512x288", # Model slug from the Luxonis Hub
)
nn.setConfidenceThreshold(0.5)

# Add neural network results to Visualizer
remote.addTopic("detections", nn.out, group="RGB")
```

## Example Applications

### Multi-Camera Streaming

```python
# Create cameras
rgb_cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
left_cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
right_cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)

# Request NV12 streams from the cameras
rgb_stream = rgb_cam.requestOutput(size=(1280, 720), type=dai.ImgFrame.Type.NV12)
left_stream = left_cam.requestOutput(size=(800, 600), type=dai.ImgFrame.Type.NV12)
right_stream = right_cam.requestOutput(size=(800, 600), type=dai.ImgFrame.Type.NV12)

# ...

# Add all cameras to Visualizer
remote.addTopic("rgb", rgb_stream, group="Cameras")
remote.addTopic("left", left_stream, group="Cameras")
remote.addTopic("right", right_stream, group="Cameras")
```

### Object Detection Visualization

```python
# Set up detection network
nn = pipeline.create(dai.node.DetectionNetwork).build(
    input=rgb_cam, # Input camera node to use as input for the neural network
    model="luxonis/yolov6-nano:r2-coco-512x288", # Model slug from the Luxonis Hub
)
nn.setConfidenceThreshold(0.5)

# Add RGB preview and detections to Visualizer
remote.addTopic("preview", rgb_stream, group="Camera")
remote.addTopic("detections", nn.out, group="Camera")
```
