OAK Visualizer
Visualizer lets you view multiple camera streams and data outputs from your OAK device directly in a web browser. It provides a more flexible alternative to OpenCV's `imshow()` function, with support for viewing multiple streams simultaneously and interactive controls. Visualizer works with the DepthAI API and is compatible with all OAK devices. Note that all features of Visualizer work only in Chrome: all browsers should support RAW streams, but encoded streams and the features built on them require the full WebCodecs API, which currently only Chrome provides with high quality and performance.

Getting Started
Basic Setup
To use the Visualizer, you need to create a `RemoteConnection` in your DepthAI application:

```python
import depthai as dai

# Create a RemoteConnection (Visualizer server)
visualizer = dai.RemoteConnection()

# Create pipeline
with dai.Pipeline() as pipeline:
    # Create camera node
    # The build method without arguments uses the default camera
    rgb_cam = pipeline.create(dai.node.Camera).build()

    # Request an NV12 stream with a resolution of 1280x720 from the camera
    rgb_stream = rgb_cam.requestOutput(size=(1280, 720), type=dai.ImgFrame.Type.NV12)

    # Add the camera stream as a topic in the Visualizer
    visualizer.addTopic("rgb", rgb_stream)

    # Build and start the pipeline
    pipeline.start()

    # Register the pipeline graph to be visualized in the Visualizer
    visualizer.registerPipeline(pipeline)

    # Main loop executed while the pipeline is running
    while pipeline.isRunning():
        # Check for key presses in the Visualizer
        if visualizer.waitKey(1) == ord('q'):
            # If 'q' is pressed, stop the pipeline
            pipeline.stop()
```
Once the pipeline is running, open http://localhost:8080 in your browser to view the streams.

Features
- Multiple Stream Viewing - View all camera streams simultaneously in the browser
- Custom Message Queues - Create custom message queues for sending data to the Visualizer
- Remote Viewing - Access the Visualizer from any device on your network
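For the Remote Viewing feature, other devices on your network need the URL of the machine running the pipeline rather than localhost. As a minimal sketch (the `visualizer_url` helper is hypothetical, not part of DepthAI), you can print the LAN address to open from another device using only the standard library:

```python
import socket

def visualizer_url(http_port: int = 8080) -> str:
    """Best-effort LAN URL for the Visualizer's web interface (hypothetical helper)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # No packets are sent; connect() on a UDP socket only selects
        # the outbound interface so we can read its local IP address.
        s.connect(("8.8.8.8", 80))
        ip = s.getsockname()[0]
    except OSError:
        # No network available; fall back to the local interface.
        ip = "127.0.0.1"
    finally:
        s.close()
    return f"http://{ip}:{http_port}"

print(visualizer_url())
```

This assumes the Visualizer was bound to the default address `0.0.0.0`, so it listens on all interfaces; if you bound it to a specific address, use that address instead.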
API Reference
RemoteConnection Class
The `RemoteConnection` class is the main interface for the Visualizer:

```python
remote = dai.RemoteConnection(
    address="0.0.0.0",    # IP address to bind to
    webSocketPort=8765,   # WebSocket port
    serveFrontend=True,   # Whether to serve the frontend UI
    httpPort=8080         # HTTP port for the web interface
)
```
Adding Topics
You can add topics (streams) to the Visualizer in two ways. First, by adding a topic that streams the output of a node:

```python
remote.addTopic(
    topicName="rgb",                  # Name shown in the UI
    output=rgb_stream,                # Node output to stream
    group="Cameras",                  # Optional group name
    useVisualizationIfAvailable=True  # If True, calls getVisualizationMessage() on the output messages to obtain the visualization
)
```
Second, by adding a topic backed by a message queue, which you can then send custom data to:

```python
rgb_queue = remote.addTopic(
    topicName="rgb",                  # Name shown in the UI
    group="Cameras",                  # Optional group name
    maxSize=2,                        # Maximum size of the created queue
    blocking=False,                   # Whether the created queue is blocking
    useVisualizationIfAvailable=True  # If True, calls getVisualizationMessage() on the queued messages to obtain the visualization
)

# Send custom data to the queue
rgb_queue.send(my_data)
```
Integration with Neural Networks
```python
# Create neural network node
nn = pipeline.create(dai.node.DetectionNetwork).build(
    input=rgb_cam,                                # Camera node used as input for the neural network
    model="luxonis/yolov6-nano:r2-coco-512x288",  # Model slug from the Luxonis Hub
)
nn.setConfidenceThreshold(0.5)

# Add neural network results to the Visualizer
remote.addTopic("detections", nn.out, group="RGB")
```
Example Applications
Multi-Camera Streaming
```python
# Create cameras
rgb_cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
left_cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
right_cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)

# Request NV12 streams from the cameras
rgb_stream = rgb_cam.requestOutput(size=(1280, 720), type=dai.ImgFrame.Type.NV12)
left_stream = left_cam.requestOutput(size=(800, 600), type=dai.ImgFrame.Type.NV12)
right_stream = right_cam.requestOutput(size=(1280, 720), type=dai.ImgFrame.Type.NV12)

# ...

# Add all cameras to the Visualizer
remote.addTopic("rgb", rgb_stream, group="Cameras")
remote.addTopic("left", left_stream, group="Cameras")
remote.addTopic("right", right_stream, group="Cameras")
```
Object Detection Visualization
```python
# Set up detection network
nn = pipeline.create(dai.node.DetectionNetwork).build(
    input=rgb_cam,                                # Camera node used as input for the neural network
    model="luxonis/yolov6-nano:r2-coco-512x288",  # Model slug from the Luxonis Hub
)
nn.setConfidenceThreshold(0.5)

# Add RGB preview and detections to the Visualizer
remote.addTopic("preview", rgb_stream, group="Camera")
remote.addTopic("detections", nn.out, group="Camera")
```