• Multi-Device Setup
  • Discovering OAK cameras
  • Selecting a Specific DepthAI device to be used
  • Specifying POE device to be used
  • Timestamp syncing
  • Point cloud fusion
  • Multi camera calibration

Multi-Device Setup

You can find demo scripts here. Learn how to discover multiple OAK cameras connected to your system and use them individually.

Discovering OAK cameras

You can use DepthAI to discover all connected OAK cameras, either via USB or through the LAN (OAK POE cameras). The code snippet below finds all OAK cameras and prints their MxIDs (unique identifiers) and their XLink states.
import depthai

for device in depthai.Device.getAllAvailableDevices():
    print(f"{device.getMxId()} {device.state}")
Example results for 2x DepthAI on a system:
Command Line
14442C10D13EABCE00 XLinkDeviceState.X_LINK_UNBOOTED
14442C1071659ACD00 XLinkDeviceState.X_LINK_UNBOOTED

Selecting a Specific DepthAI device to be used

From the detected device(s) above, use the following code to select the device you would like to use with your pipeline. For example, to use the first device from the list above:
# Specify MxID, IP address, or USB path
device_info = depthai.DeviceInfo("14442C108144F1D000")  # MxID
#device_info = depthai.DeviceInfo("")  # IP address
#device_info = depthai.DeviceInfo("3.3.3")  # USB port name
with depthai.Device(pipeline, device_info) as device:
    # ...
You can use this code as a basis for your own use cases, for example to run different neural network models on different OAK devices.
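As an illustrative sketch of that idea (the MxIDs and blob paths below are hypothetical placeholders, not real devices or files), you could keep a mapping from each device's MxID to the model it should run:

```python
# Hypothetical mapping from device MxID to a neural network blob path.
# The IDs and paths are placeholders for illustration only.
MODEL_FOR_DEVICE = {
    "14442C10D13EABCE00": "models/mobilenet-ssd.blob",
    "14442C1071659ACD00": "models/yolo-v4-tiny.blob",
}

def pick_model(mx_id: str, default: str = "models/mobilenet-ssd.blob") -> str:
    """Return the blob path configured for this MxID, or a default."""
    return MODEL_FOR_DEVICE.get(mx_id, default)
```

Each discovered device's MxID (from `getMxId()`) can then select the pipeline to build for it.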

Specifying POE device to be used

You can also specify the POE device to be used by its IP address, as shown in the code snippet above. Now use as many OAK cameras as you need! And since DepthAI does all the heavy lifting, you can usually run quite a few of them with very little burden to the host.

Timestamp syncing

Timestamp synchronization, also referred to as message syncing, aligns messages from various sensors, such as frames, IMU packets, and ToF data. More information about timestamp synchronization can be found on the Frame synchronization page.
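To illustrate the underlying idea on the host side (a simplified sketch of nearest-timestamp pairing, not DepthAI's actual syncing implementation), messages from two streams can be matched when their timestamps fall within a tolerance:

```python
def pair_by_timestamp(stream_a, stream_b, tolerance=0.010):
    """Pair (timestamp, message) tuples from two timestamp-sorted
    streams whose timestamps differ by at most `tolerance` seconds.
    Unmatched (too old) messages are dropped."""
    pairs = []
    i = j = 0
    while i < len(stream_a) and j < len(stream_b):
        ta, ma = stream_a[i]
        tb, mb = stream_b[j]
        if abs(ta - tb) <= tolerance:
            pairs.append((ma, mb))
            i += 1
            j += 1
        elif ta < tb:
            i += 1  # message from stream A is too old, drop it
        else:
            j += 1  # message from stream B is too old, drop it
    return pairs
```

In practice the timestamps would come from each message (e.g. its device-side capture time), and the tolerance would be chosen relative to the frame interval.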

Point cloud fusion

Point cloud fusion is the process of integrating multiple point clouds into one cohesive representation. Widely applied in robotics, autonomous vehicles, virtual reality, and 3D mapping, it aims to produce a more comprehensive and accurate scene depiction by merging data from various viewpoints or sensors. This demo shows how point clouds from different cameras can be merged together; please visit the GitHub page from the link below. Note that before you can run this demo (or any other multi-camera application), you need to calibrate the cameras and generate the calibration files for each camera. Please see the next section.
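Conceptually, fusion amounts to transforming each cloud into a shared world frame using that camera's extrinsics, then concatenating. A minimal numpy sketch (assuming each camera's extrinsics are given as a rotation matrix R and translation t into the common frame; this is not the demo's actual code):

```python
import numpy as np

def fuse_point_clouds(clouds):
    """Fuse point clouds from multiple cameras into one.

    clouds: list of (points, R, t) tuples, where points is an (N, 3)
    array in the camera frame, R is a (3, 3) rotation and t a (3,)
    translation mapping camera coordinates into the shared world frame.
    Returns a single (sum of N, 3) fused cloud.
    """
    # Apply p_world = R @ p + t to every point of every cloud.
    world_clouds = [pts @ R.T + t for pts, R, t in clouds]
    return np.vstack(world_clouds)
```

Real pipelines typically also downsample the fused cloud and refine the alignment (e.g. with ICP), which this sketch omits.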

Point cloud fusion on GitHub


Multi camera calibration

This example demonstrates how to compute extrinsic parameters (pose of the camera) for multiple cameras. It provides a practical illustration of how to determine the relative positions and orientations of different cameras in a multi-camera setup. By accurately estimating the extrinsic parameters, we can ensure that the images captured by each camera are correctly aligned and can be effectively combined for further processing and analysis.
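As a small illustration of the math involved (a numpy sketch using homogeneous 4x4 transforms, not the example's actual code), the pose of camera B relative to camera A follows from their poses in a common world frame:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous camera-to-world transform from a
    (3, 3) rotation R and a (3,) translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_world_a, T_world_b):
    """Pose of camera B expressed in camera A's frame:
    T_a_b = inv(T_world_a) @ T_world_b."""
    return np.linalg.inv(T_world_a) @ T_world_b
```

In a calibration procedure, each camera's world pose would typically be estimated by observing a shared calibration target; the relative pose is then what lets the cameras' outputs be aligned and combined.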

Multiple camera calibration on GitHub
