DepthAI ROS Driver

Currently, the recommended way to launch cameras is to use the executables from the depthai_ros_driver package. This runs your camera as a ROS2 Component/ROS Nodelet and gives you the ability to customize your camera using ROS parameters.

The main API difference between ROS2 and ROS1 is that parameter names use different conventions - parameters in ROS2 are namespaced using . while parameters in ROS1 are namespaced using _, for example camera.i_pipeline_type in ROS2 versus camera_i_pipeline_type in ROS1. For the sake of simplicity, this Readme uses the ROS2 convention. A list of all available parameters with their default values can be found at the bottom of this Readme.

Parameters that begin with r_ can be freely modified during runtime, for example with rqt. Parameters that begin with i_ are set when the camera is initializing; to change them, you have to call the stop and start services. This can be used to hot-swap NNs during runtime, change resolutions, etc.
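For example, an r_ parameter can be changed at runtime from the command line (a sketch assuming the default oak node name and the rgb exposure parameters listed at the bottom of this Readme):
Command Line
ros2 param set /oak rgb.r_set_man_exposure true
ros2 param set /oak rgb.r_exposure 20000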

Launch files

We provide a few examples for the depthai_ros_driver package:
  • camera.launch.py launches the camera in RGBD mode and an NN in spatial (MobileNet) mode.
  • rgbd_pcl.launch.py launches the camera in a basic RGBD configuration and doesn't load any NNs. It also loads ROS depth processing nodes for the RGBD pointcloud.
  • example_multicam.launch.py launches several cameras at once, each one in a different container. Edit the multicam_example.yaml config file in the config directory to change parameters.
  • example_segmentation.launch.py launches the camera in RGBD + semantic segmentation mode (pipeline type=RGBD, nn_type=rgb).
  • pointcloud.launch.py - similar to rgbd_pcl.launch.py, but doesn't use the RGB component for the pointcloud.
  • example_marker_publish.launch.py launches camera.launch.py + a small Python node that publishes detected objects as markers/TFs.
  • rtabmap.launch.py launches the camera and RTAB-Map RGBD SLAM (you need to install it first - sudo apt install ros-$ROS_DISTRO-rtabmap-ros). You might need to set manual focus via parameters here.
To launch them, you can run:
  • ros2 launch depthai_ros_driver camera.launch.py (ROS2)
  • roslaunch depthai_ros_driver camera.launch (ROS1)

Publishing TFs from extrinsics

By default, camera transforms are published from default URDF descriptions based on CAD models. This can be overridden by using the TFPublisher class from depthai_bridge, which, based on the Device's camera calibration data, republishes the description with updated information. To enable this behavior in depthai_ros_driver, you can use the following parameters:
  • camera.i_publish_tf_from_calibration - setting this to true launches TFPublisher
Then you can set the following arguments:
  • camera.i_tf_camera_name - if not set, defaults to the node name
  • camera.i_tf_camera_model - if not set, it will be automatically detected. If the node is unable to detect the STL file for the camera, it is set to OAK-D. To explicitly set it in camera.launch.py, set override_cam_model:=true
  • camera.i_tf_base_frame
  • camera.i_tf_parent_frame
  • camera.i_tf_cam_pos_x
  • camera.i_tf_cam_pos_y
  • camera.i_tf_cam_pos_z
  • camera.i_tf_cam_roll
  • camera.i_tf_cam_pitch
  • camera.i_tf_cam_yaw
When using camera.launch.py, you can set pass_tf_args_as_params:=true so that TF arguments are used to fill those parameters, for example: ros2 launch depthai_ros_driver camera.launch.py pass_tf_args_as_params:=true parent_frame:=map cam_pos_x:=1.0 imu_from_descr:=true

It is also possible to set a custom URDF path (for now, only absolute paths work) and custom xacro arguments using camera.i_tf_custom_urdf_path and camera.i_tf_custom_xacro_args. Please note that robot_state_publisher must be running.

NOTE ON IMU EXTRINSICS: If your camera has an uncalibrated IMU, a warning will be shown, and the IMU will be published with zero rotation and translation. You can override this behavior by setting camera.i_tf_imu_from_descr: true. This will publish default IMU extrinsics from the URDF based on the camera model.
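For reference, a minimal parameter file enabling calibration-based TF publishing might look like this (a sketch; note that in the default configuration the pose values are passed as strings):
Yaml
/oak:
  ros__parameters:
    camera:
      i_publish_tf_from_calibration: true
      i_tf_parent_frame: map
      i_tf_cam_pos_x: "1.0"  # hypothetical 1 m offset along x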

Available sensors and their resolutions:

  • IMX378 - {"12MP", "4K", "1080P"}, default 1080P
  • OV9282 - {"800P", "720P", "400P"}, default 800P
  • OV9782 - {"800P", "720P", "400P"}, default 800P
  • OV9281 - {"800P", "720P", "400P"}, default 800P
  • IMX214 - {"13MP", "12MP", "4K", "1080P"}, default 1080P
  • IMX412 - {"13MP", "12MP", "4K", "1080P"}, default 1080P
  • OV7750 - {"480P", "400P"}, default 480P
  • OV7251 - {"480P", "400P"}, default 480P
  • IMX477 - {"12MP", "4K", "1080P"}, default 1080P
  • IMX577 - {"12MP", "4K", "1080P"}, default 1080P
  • AR0234 - {"1200P"}, default 1200P
  • IMX582 - {"48MP", "12MP", "4K"}, default 4K
  • LCM48 - {"48MP", "12MP", "4K"}, default 4K

Setting RGB parameters

By default, the RGB camera outputs an ISP frame. To set a custom width and height of the output image, you can set i_isp_num and i_isp_den, which scale the image dimensions (2 and 3 by default, so from 1920x1080 to 1280x720). Note that for RGBD alignment to work, the resulting width and height must be divisible by 16.

Additionally, you can set i_output_isp: false to use the video output and set a custom size using the i_width and i_height parameters.
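As an illustration, the default ISP scaling could be expressed in a parameter file like this (a sketch; 1920x1080 scaled by 2/3 gives 1280x720, which is divisible by 16):
Yaml
/oak:
  ros__parameters:
    rgb:
      i_isp_num: 2  # numerator of the scaling factor
      i_isp_den: 3  # denominator of the scaling factor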

Setting Stereo parameters

Depth alignment

When setting stereo.i_align_depth: true, the stereo output is aligned to the board socket specified by the stereo.i_board_socket_id parameter (by default 0/CAM_A). You can enable rectified Stereo streams by setting, for example in the case of the right stream, i_publish_right_rect: true. You can also set i_publish_synced_rect_pair: true to get both images with the same timestamps.
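A hedged example configuration aligning depth to the RGB socket and publishing a synced rectified pair:
Yaml
/oak:
  ros__parameters:
    stereo:
      i_align_depth: true
      i_board_socket_id: 0  # 0 = CAM_A (RGB)
      i_publish_left_rect: true
      i_publish_right_rect: true
      i_publish_synced_rect_pair: true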

Stereo socket order

By default, the right camera is treated as first when used for stereo computation, which is reflected in the CameraInfo messages. If you want to reverse that logic, set stereo.i_reverse_stereo_socket_order: true (this can also be set for individual sensors).

Custom Sensor sockets

Configuration of which sensors are used for computing the stereo pair can be done either programmatically, by specifying them in the constructor of a Stereo node (for example when building a custom pipeline), or via the stereo.i_left_socket_id/stereo.i_right_socket_id parameters. Please note that currently, if you want to use the rgb/center socket instead of one of the given pairs, you will need to build a custom pipeline for that.
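For example, explicitly setting the default stereo pair via parameters (a sketch mirroring the defaults from camera.yaml):
Yaml
/oak:
  ros__parameters:
    stereo:
      i_left_socket_id: 1   # CAM_B
      i_right_socket_id: 2  # CAM_C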

Feature Tracker

Each sensor node (and rectified streams from the Stereo node) has the option to add a FeatureTracker node, which publishes depthai_ros_msgs/msg/TrackedFeatures messages. To enable features on, for example, the rgb node, set rgb.i_enable_feature_tracker: true. To enable publishing on rectified streams, set, for example, stereo.i_left_rect_enable_feature_tracker: true.
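Both options combined in a parameter file (a sketch):
Yaml
/oak:
  ros__parameters:
    rgb:
      i_enable_feature_tracker: true
    stereo:
      i_left_rect_enable_feature_tracker: true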

Setting IMU parameters

Parameters:
  • i_acc_freq: 400 - Accelerometer sensor frequency
  • i_acc_cov: 0.0 - Accelerometer covariance
  • i_batch_report_threshold: 1 - Batch report size
  • i_enable_rotation: false - Whether to enable rotation vector & magnetometer data (available when using the IMU_WITH_MAG/IMU_WITH_MAG_SPLIT message types)
  • i_gyro_cov: 0.0 - Gyroscope covariance
  • i_gyro_freq: 400 - Gyroscope frequency
  • i_mag_cov: 0.0 - Magnetometer covariance
  • i_mag_freq: 100 - Magnetometer frequency
  • i_max_batch_reports: 10 - Max reports per batch
  • i_message_type: IMU - ROS publisher type:
    • IMU - sensor_msgs/Imu
    • IMU_WITH_MAG - depthai_ros_msgs/ImuWithMagneticField
    • IMU_WITH_MAG_SPLIT - two publishers - sensor_msgs/Imu & sensor_msgs/MagneticField
  • i_rot_cov: -1.0 - Rotation covariance
  • i_rot_freq: 400 - Rotation frequency
  • i_sync_method: LINEAR_INTERPOLATE_ACCEL - sync method. Available options:
    • COPY
    • LINEAR_INTERPOLATE_GYRO
    • LINEAR_INTERPOLATE_ACCEL
  • i_rotation_vector_type - type of rotation vector; for more information, refer to this link. Available options:
    • ROTATION_VECTOR
    • GAME_ROTATION_VECTOR
    • GEOMAGNETIC_ROTATION_VECTOR
    • ARVR_STABILIZED_ROTATION_VECTOR
    • ARVR_STABILIZED_GAME_ROTATION_VECTOR
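As an illustration, a parameter file enabling rotation and magnetometer output might look like this (a sketch; the frequencies are the defaults listed above):
Yaml
/oak:
  ros__parameters:
    imu:
      i_message_type: IMU_WITH_MAG
      i_enable_rotation: true
      i_rotation_vector_type: ROTATION_VECTOR
      i_acc_freq: 400
      i_gyro_freq: 400
      i_mag_freq: 100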

Stopping/starting camera for power saving/reconfiguration

Stopping the camera can also be used for power saving, as the pipeline is removed from the device. Topics are also removed when the camera is stopped. There is also an option to enable auto-restart if the camera encounters an error. To do that, set camera.i_restart_on_diagnostics_error: true.
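As a sketch, assuming the default oak node name and std_srvs/srv/Trigger services (check ros2 service list for the exact names in your setup), stopping and starting could look like:
Command Line
ros2 service call /oak/stop_camera std_srvs/srv/Trigger
ros2 service call /oak/start_camera std_srvs/srv/Trigger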

Pipeline parameters

As for the parameters themselves, there are a few crucial ones that decide how the camera behaves.
  • camera.i_pipeline_type can be either:
    • RGB - only publishes the RGB stream, NN available
    • RGBD - publishes RGB + Depth streams (set i_publish_topic for the left and right cameras to enable them), NN & Spatial NN available
    • Stereo - publishes streams from the left and right sensors, NN can be set for each sensor separately
    • RGBStereo - publishes RGB + Left + Right streams, NN can be set for each sensor separately
    • Depth - publishes only the depth stream, NN can be set on the stereo node via a parameter
    • CamArray - publishes streams for all detected sensors, NN can be set for each sensor separately
    • Tof - publishes only the ToF stream, no NN available
    • DepthTof - publishes Depth + ToF streams, specific to OAK-D-SR-PoE, NN can be set on the stereo node via a parameter
    • StereoToF - publishes Stereo + ToF streams, specific to OAK-D-SR-PoE, NN can be set on the stereo node via a parameter
    • RGBToF - publishes RGB + ToF streams, specific to OAK-D-SR-PoE, NN can be set on the stereo node via a parameter
This parameter tells the camera whether it should load stereo components. Default is set to RGBD. It is also possible to create a custom pipeline, since all types are defined as plugins.
To do that, you can create a custom package (let's say test_plugins) and create an executable in that package (test_plugins.cpp). Inside that file, define a custom plugin that inherits from depthai_ros_driver::pipeline_gen::BasePipeline and overrides the createPipeline method. After that, export the plugin, for example:
C++
#include <pluginlib/class_list_macros.hpp>
PLUGINLIB_EXPORT_CLASS(test_plugins::Test, depthai_ros_driver::pipeline_gen::BasePipeline)
Add plugin definition:
Xml
<library path="test_plugins">
  <class type="test_plugins::Test" base_class_type="depthai_ros_driver::pipeline_gen::BasePipeline">
    <description>Test Pipeline.</description>
  </class>
</library>
Now you can use the created plugin as the pipeline - just set camera.i_pipeline_type to test_plugins::Test.

Neural networks

Basic configuration options for cameras with 3 sensors can be controlled by 3 camera parameters:
  • camera.i_nn_type can be either none, rgb or spatial. This is responsible for whether the NN that we load should also take depth information (and, for example, provide detections in 3D format). Default is set to spatial.
  • camera.i_mx_id/camera.i_ip/camera.i_usb_port_id are for connecting to a specific camera. If not set, it automatically connects to the next available device. You can get those parameters from the logs by running the default launch file.
  • nn.i_nn_config_path represents the path to a JSON file that contains information on what type of NN to load and what parameters to use. Currently, we provide options to load MobileNet, Yolo and Segmentation (not in spatial) models. To see their example configs, navigate to depthai_ros_driver/config/nn. Defaults to mobilenet.json from depthai_ros_driver.
To use the provided example NNs, you can set the path to:
  • depthai_ros_driver/segmentation
  • depthai_ros_driver/mobilenet
  • depthai_ros_driver/yolo
You can also choose to run an NN on a specific sensor by setting sensor_name.i_enable_nn to true. After that, you can similarly set sensor_name_nn.i_nn_config_path to a path to a config file. You can also set up a spatial NN for the Stereo node and use one of its left/right streams as the rgb input. This can be enabled on cameras with RGB side sensors, such as OAK-D-SR and OAK-D-LR. Setup is similar:
  • stereo.i_enable_spatial_nn - set to true to enable spatial NN
  • stereo.i_spatial_nn_source - set to either left or right to choose which sensor to use as rgb input for NN
  • stereo_nn.i_nn_config_path - path to config file for spatial NN
All available camera-specific parameters and their default values can be seen in depthai_ros_driver/config/camera.yaml.
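For example, a parameter sketch that runs a spatial NN with the provided Yolo config (using the example paths listed above) could look like this:
Yaml
/oak:
  ros__parameters:
    camera:
      i_nn_type: spatial
    nn:
      i_nn_config_path: depthai_ros_driver/yolo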

Syncing

Currently, you can sync any number of image topics with a sync node wrapper. To create a sync node, set pipeline_gen.i_enable_sync: true and, for example, rgb.i_synced: true and stereo.i_synced: true. This example configuration is applied when running camera.launch.py with the pointcloud.enable:=true argument.
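The same configuration expressed as a parameter file (a sketch):
Yaml
/oak:
  ros__parameters:
    pipeline_gen:
      i_enable_sync: true
    rgb:
      i_synced: true
    stereo:
      i_synced: true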

Specific camera configurations

PoE Cameras

Since PoE cameras use a protocol with lower throughput than USB, running the default camera launch can result in lag, depending on the chosen resolution/FPS. To combat this issue, you can use encoded frames, which let you keep the desired resolution/FPS at the cost of reduced image quality due to compression. One additional difference is that subpixel depth filtering is disabled in this mode. To enable low_bandwidth mode, for example for the rgb camera, change the following parameters:
  • rgb.i_low_bandwidth - true to enable
  • rgb.i_low_bandwidth_quality - desired quality % (default: 50)
See the low_bandwidth.yaml file for example parameters for all streams.
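For instance, enabling low-bandwidth mode for the rgb stream in a parameter file (a sketch):
Yaml
/oak:
  ros__parameters:
    rgb:
      i_low_bandwidth: true
      i_low_bandwidth_quality: 50  # encoding quality in percent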

ToF Cameras

The OAK-D-SR-PoE, due to its specific configuration, requires a separate launch file - sr_poe_rgbd_pcl.launch.py. This also utilizes pipeline configurations specific to the OAK-D-SR; you can find more information about those pipeline types above. Please note that currently ToF cameras publish images in the rgb frame due to the sensor socket naming convention.

RealSense compatibility

For a quick integration test, you can use camera.launch.py with the rs_compat:=true argument to mimic RealSense camera behavior. This will set up the camera with an RGBD pipeline and publish topics with names compatible with a RealSense camera. TF frames will also be available in RealSense format. Similarly to how the RS node works, you can enable pointcloud publishing with the pointcloud.enable:=true argument. Please note that the driver will spawn with a different name and namespace, as well as different DAI node names, so you might need to adjust your config files accordingly. The naming convention is as follows:
  • Namespace: '' -> camera
  • Node name: oak -> camera
  • rgb -> color
  • stereo -> depth
  • left -> infra_2
  • right -> infra_1
  • Frame: oak -> camera_link

Example

Let's take a look at what changes would be needed for a launch file:
Python
# Example inclusion of the RealSense driver
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import PathJoinSubstitution
from launch_ros.substitutions import FindPackageShare

rs_camera_launch_include = IncludeLaunchDescription(
    PythonLaunchDescriptionSource([
        PathJoinSubstitution([
            FindPackageShare('realsense2_camera'),
            'launch',
            'rs_launch.py',
        ])
    ]),
    launch_arguments={
        # launch arguments must be passed as strings
        'pointcloud.enable': 'true',
    }.items()
)
The preceding launch description will run the camera driver in RGBD mode with the pointcloud enabled. Now, to use an OAK camera instead, you would change it as follows:
Python
# Example inclusion of the OAK driver (imports as above)
oak_camera_launch_include = IncludeLaunchDescription(
    PythonLaunchDescriptionSource([
        PathJoinSubstitution([
            FindPackageShare('depthai_ros_driver'),
            'launch',
            'camera.launch.py',
        ])
    ]),
    launch_arguments={
        'rs_compat': 'true',
        'pointcloud.enable': 'true',
    }.items()
)
This launches the camera with the same topics, the only change is that the default profile for depth is 1280x720x30 instead of 848x480x30.

RTABMap example with RealSense compatibility

Let's take a look at one of the examples in the rtabmap_ros package, realsense_d435i_color.launch.py. Note: as of this moment (2024-08-27), this example needs to be adjusted for the RS topic names as follows:

Before

Python
remappings=[
    ('imu', '/imu/data'),
    ('rgb/image', '/camera/color/image_raw'),
    ('rgb/camera_info', '/camera/color/camera_info'),
    ('depth/image', '/camera/realigned_depth_to_color/image_raw')]

After

Python
remappings=[
    ('imu', '/imu/data'),
    ('rgb/image', '/camera/camera/color/image_raw'),
    ('rgb/camera_info', '/camera/camera/color/camera_info'),
    ('depth/image', '/camera/camera/depth/image_rect_raw')]
Now, to launch the example with an RS camera, per the instructions you need to run the following command:
Command Line
ros2 launch realsense2_camera rs_launch.py enable_gyro:=true enable_accel:=true unite_imu_method:=1 enable_sync:=true
To launch the same example with an OAK camera, you would need to run the following command:
Command Line
ros2 launch depthai_ros_driver camera.launch.py rs_compat:=true
Note that the recommended approach for ROS2 nodes is to use components to improve performance and reduce latency. The RS camera driver does not provide a dedicated launch file for running the camera node in a component container (except intra_process_demo), so you would need to create a custom container for that. The DepthAI ROS driver's default launch file runs in composition mode by default, so you can add more nodes to the same container at runtime. You can also refer to the rtabmap.launch.py example in the depthai_ros_driver package for running RTAB-Map with an OAK camera, and with slight modifications you can enable RS mode for that example as well.

Recalibration

If you want to use calibration values other than the ones provided by the device, you can do it in the following ways:
  • Use the set_camera_info services available for each of the image streams.
  • Use the i_calibration_file parameter to point to a calibration file. Note: the camera name must start with /, so for example /rgb. See depthai_ros_driver/config/calibration for example calibration files. A calibration.launch file is provided to start up a ROS camera calibrator node in both monocular and stereo configurations.
Calibration file syntax (from camera_info_manager):
Yaml
- file:///full/path/to/local/file.yaml
- file:///full/path/to/videre/file.ini
- package://camera_info_manager/tests/test_calibration.yaml
- package://ros_package_name/calibrations/camera3.yaml
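For instance, pointing the rgb stream to a local calibration file (a sketch; the path is hypothetical):
Yaml
/oak:
  ros__parameters:
    rgb:
      i_calibration_file: file:///full/path/to/rgb_calibration.yaml  # hypothetical path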

Using external sources for NN inference or Stereo Depth

It is possible to use external image topics as input for NNs or depth calculation.

Example scenarios

  1. We want to get segmentation or 2D detection output based on images published by a usb camera node. This can be achieved by setting the rgb.i_simulate_from_topic parameter to true. This creates a sensor_msgs/Image subscriber listening by default on the /<node_name>/rgb/input topic, which passes data into the pipeline on each callback. Topic names can be changed either by classic ROS topic remapping or by setting rgb.i_simulated_topic_name to a desired name. By default, the original sensor node still runs and publishes data; setting rgb.i_disable_node to true will prevent it from spawning. Check det2d_usb_cam_overlay.launch.py in depthai_filters to see it in action.
  2. Calculating depth - both the left and right sensor nodes can be set up as in the example above to calculate depth/disparity from external topics (see the sketch after this list). Note that for this to work properly, you need to specify:
  • left.i_board_socket_id: 1
  • right.i_board_socket_id: 2
  • The default stereo input size is 1280x720. For a different image size, set stereo.i_set_input_size: true and adjust stereo.i_input_width and stereo.i_input_height accordingly.
  • An external calibration file path, using the camera.i_external_calibration_path parameter. To get the calibration from the device, you can either set camera.i_calibration_dump to true or call the save_calibration service. The calibration will be saved to /tmp/<mx_id>_calibration.json. An example can be seen in stereo_from_rosbag.launch.py in depthai_ros_driver.
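A minimal parameter sketch for the depth-from-external-topics scenario might look like this (the input size is hypothetical; adjust it to your data):
Yaml
/oak:
  ros__parameters:
    camera:
      i_external_calibration_path: /tmp/<mx_id>_calibration.json  # replace <mx_id> with your device ID
    left:
      i_simulate_from_topic: true
      i_board_socket_id: 1
    right:
      i_simulate_from_topic: true
      i_board_socket_id: 2
    stereo:
      i_set_input_size: true  # only needed when images are not 1280x720
      i_input_width: 640      # hypothetical external image size
      i_input_height: 400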

Developers guide

For easier development inside an isolated workspace, you can use Visual Studio Code with the DevContainers plugin. To do that:
  • Create separate workspace
  • Clone repository into src
  • Copy .devcontainer directory into main workspace directory
  • Open workspace directory in VSCode

List of parameters

Yaml
/oak:
  ros__parameters:
    camera:
      i_calibration_dump: false
      i_enable_diagnostics: true
      i_enable_ir: true
      i_enable_sync: false
      i_external_calibration_path: ""
      i_floodlight_brightness: 0
      i_ip: ""
      i_laser_dot_brightness: 800
      i_mx_id: ""
      i_nn_type: none
      i_pipeline_dump: false
      i_pipeline_type: RGBD
      i_publish_tf_from_calibration: false
      i_restart_on_diagnostics_error: false
      i_rs_compat: false
      i_tf_base_frame: oak
      i_tf_cam_pitch: "0.0"
      i_tf_cam_pos_x: "0.0"
      i_tf_cam_pos_y: "0.0"
      i_tf_cam_pos_z: "0.0"
      i_tf_cam_roll: "0.0"
      i_tf_cam_yaw: "0.0"
      i_tf_camera_model: ""
      i_tf_camera_name: oak
      i_tf_custom_urdf_location: ""
      i_tf_custom_xacro_args: ""
      i_tf_imu_from_descr: "false"
      i_tf_parent_frame: oak-d-base-frame
      i_usb_port_id: ""
      i_usb_speed: SUPER_PLUS
    diagnostic_updater:
      period: 1.0
    pipeline_gen:
      i_enable_diagnostics: true
      i_enable_imu: true
      i_enable_sync: false
    imu:
      i_acc_cov: 0.0
      i_acc_freq: 400
      i_batch_report_threshold: 5
      i_enable_rotation: false
      i_get_base_device_timestamp: false
      i_gyro_cov: 0.0
      i_gyro_freq: 400
      i_mag_cov: 0.0
      i_max_batch_reports: 10
      i_message_type: IMU
      i_rot_cov: -1.0
      i_sync_method: LINEAR_INTERPOLATE_ACCEL
      i_update_ros_base_time_on_ros_msg: false
    left:
      i_add_exposure_offset: false
      i_board_socket_id: 1
      i_calibration_file: ""
      i_disable_node: false
      i_enable_feature_tracker: false
      i_enable_lazy_publisher: true
      i_enable_nn: false
      i_exposure_offset: 0
      i_fps: 30.0
      i_fsync_continuous: false
      i_fsync_trigger: false
      i_get_base_device_timestamp: false
      i_height: 720
      i_low_bandwidth: false
      i_low_bandwidth_bitrate: 0
      i_low_bandwidth_ffmpeg_encoder: libx264
      i_low_bandwidth_frame_freq: 30
      i_low_bandwidth_profile: 4
      i_low_bandwidth_quality: 50
      i_max_q_size: 30
      i_publish_compressed: false
      i_publish_topic: false
      i_resolution: 720P
      i_reverse_stereo_socket_order: false
      i_sensor_img_orientation: AUTO
      i_set_isp3a_fps: false
      i_simulate_from_topic: false
      i_simulated_topic_name: ""
      i_synced: false
      i_update_ros_base_time_on_ros_msg: false
      i_width: 1280
      r_exposure: 1000
      r_iso: 800
      r_set_man_exposure: false
    nn:
      i_board_socket_id: 0
      i_disable_resize: false
      i_enable_passthrough: false
      i_enable_passthrough_depth: false
      i_get_base_device_timestamp: false
      i_num_inference_threads: 2
      i_num_pool_frames: 4
      i_update_ros_base_time_on_ros_msg: false
    rgb:
      i_add_exposure_offset: false
      i_board_socket_id: 0
      i_calibration_file: ""
      i_disable_node: false
      i_enable_feature_tracker: false
      i_enable_lazy_publisher: true
      i_enable_nn: false
      i_enable_preview: false
      i_exposure_offset: 0
      i_fps: 30.0
      i_fsync_continuous: false
      i_fsync_trigger: false
      i_get_base_device_timestamp: false
      i_height: 720
      i_interleaved: false
      i_isp_den: 3
      i_isp_num: 2
      i_keep_preview_aspect_ratio: true
      i_low_bandwidth: true
      i_low_bandwidth_bitrate: 0
      i_low_bandwidth_ffmpeg_encoder: libx264
      i_low_bandwidth_frame_freq: 30
      i_low_bandwidth_profile: 4
      i_low_bandwidth_quality: 50
      i_max_q_size: 30
      i_output_isp: true
      i_preview_height: 300
      i_preview_size: 300
      i_preview_width: 300
      i_publish_compressed: false
      i_publish_topic: true
      i_resolution: 1080P
      i_reverse_stereo_socket_order: false
      i_sensor_img_orientation: AUTO
      i_set_isp3a_fps: false
      i_set_isp_scale: true
      i_simulate_from_topic: false
      i_simulated_topic_name: ""
      i_synced: false
      i_update_ros_base_time_on_ros_msg: false
      i_width: 1280
      r_exposure: 20000
      r_focus: 1
      r_iso: 800
      r_set_man_exposure: false
      r_set_man_focus: false
      r_set_man_whitebalance: false
      r_whitebalance: 3300
    right:
      i_add_exposure_offset: false
      i_board_socket_id: 2
      i_calibration_file: ""
      i_disable_node: false
      i_enable_feature_tracker: false
      i_enable_lazy_publisher: true
      i_enable_nn: false
      i_exposure_offset: 0
      i_fps: 30.0
      i_fsync_continuous: false
      i_fsync_trigger: false
      i_get_base_device_timestamp: false
      i_height: 720
      i_low_bandwidth: false
      i_low_bandwidth_bitrate: 0
      i_low_bandwidth_ffmpeg_encoder: libx264
      i_low_bandwidth_frame_freq: 30
      i_low_bandwidth_profile: 4
      i_low_bandwidth_quality: 50
      i_max_q_size: 30
      i_publish_compressed: false
      i_publish_topic: false
      i_resolution: 720P
      i_reverse_stereo_socket_order: false
      i_sensor_img_orientation: AUTO
      i_set_isp3a_fps: false
      i_simulate_from_topic: false
      i_simulated_topic_name: ""
      i_synced: false
      i_update_ros_base_time_on_ros_msg: false
      i_width: 1280
      r_exposure: 1000
      r_iso: 800
      r_set_man_exposure: false
    stereo:
      i_add_exposure_offset: false
      i_align_depth: true
      i_bilateral_sigma: 0
      i_board_socket_id: 0
      i_depth_filter_size: 5
      i_depth_preset: HIGH_ACCURACY
      i_disparity_width: DISPARITY_96
      i_enable_alpha_scaling: false
      i_alpha_scaling: 0.0
      i_enable_brightness_filter: false
      i_brightness_filter_min_brightness: 0
      i_brightness_filter_max_brightness: 256
      i_enable_companding: false
      i_enable_decimation_filter: false
      i_decimation_filter_decimation_mode: PIXEL_SKIPPING
      i_decimation_filter_decimation_factor: 1
      i_enable_disparity_shift: false
      i_disparity_shift: 0
      i_enable_distortion_correction: false
      i_enable_lazy_publisher: true
      i_enable_spatial_filter: false
      i_spatial_filter_hole_filling_radius: 2
      i_spatial_filter_alpha: 0.5
      i_spatial_filter_delta: 20
      i_spatial_filter_iterations: 1
      i_enable_spatial_nn: false
      i_enable_speckle_filter: false
      i_speckle_filter_speckle_range: 50
      i_enable_temporal_filter: false
      i_temporal_filter_alpha: 0.4
      i_temporal_filter_delta: 20
      i_temporal_filter_persistency: VALID_2_IN_LAST_4
      i_enable_threshold_filter: false
      i_threshold_filter_min_range: 400
      i_threshold_filter_max_range: 15000
      i_exposure_offset: 0
      i_extended_disp: false
      i_get_base_device_timestamp: false
      i_height: 720
      i_left_rect_add_exposure_offset: false
      i_left_rect_enable_feature_tracker: false
      i_left_rect_exposure_offset: 0
      i_left_rect_low_bandwidth: false
      i_left_rect_low_bandwidth_bitrate: 0
      i_left_rect_low_bandwidth_ffmpeg_encoder: libx264
      i_left_rect_low_bandwidth_frame_freq: 30
      i_left_rect_low_bandwidth_profile: 4
      i_left_rect_low_bandwidth_quality: 50
      i_left_rect_publish_compressed: false
      i_left_rect_synced: true
      i_left_socket_id: 1
      i_low_bandwidth: false
      i_low_bandwidth_bitrate: 0
      i_low_bandwidth_ffmpeg_encoder: libx264
      i_low_bandwidth_profile: 4
      i_low_bandwidth_quality: 50
      i_lr_check: true
      i_lrc_threshold: 10
      i_max_q_size: 30
      i_output_disparity: false
      i_publish_left_rect: false
      i_publish_right_rect: false
      i_publish_synced_rect_pair: false
      i_publish_topic: true
      i_rectify_edge_fill_color: 0
      i_reverse_stereo_socket_order: false
      i_right_rect_add_exposure_offset: false
      i_right_rect_enable_feature_tracker: false
      i_right_rect_exposure_offset: 0
      i_right_rect_low_bandwidth: false
      i_right_rect_low_bandwidth_bitrate: 0
      i_right_rect_low_bandwidth_ffmpeg_encoder: libx264
      i_right_rect_low_bandwidth_frame_freq: 30
      i_right_rect_low_bandwidth_profile: 4
      i_right_rect_low_bandwidth_quality: 50
      i_right_rect_publish_compressed: false
      i_right_rect_synced: true
      i_right_socket_id: 2
      i_set_disparity_to_depth_use_spec_translation: false
      i_set_input_size: false
      i_socket_name: rgb
      i_spatial_nn_source: right
      i_stereo_conf_threshold: 240
      i_subpixel: true
      i_subpixel_fractional_bits: 3
      i_synced: false
      i_update_ros_base_time_on_ros_msg: false
      i_width: 1280
    use_sim_time: false
    tof:
      i_publish_topic: true
      i_synced: true
      i_calibration_file: ""
      i_low_bandwidth: false # enabling low bandwidth mode might currently not work properly
      i_low_bandwidth_bitrate: 0
      i_low_bandwidth_ffmpeg_encoder: libx264
      i_low_bandwidth_profile: 4
      i_low_bandwidth_quality: 80
      i_get_base_device_timestamp: false
      i_update_ros_base_time_on_ros_msg: false
      i_enable_lazy_publisher: false
      i_add_exposure_offset: false
      i_exposure_offset: 0
      i_reverse_stereo_socket_order: false
      i_board_socket_id: 0
      i_max_q_size: 8
      i_width: 640
      i_height: 480
      i_fps: 30.0
      i_enable_optical_correction: false
      i_enable_fppn_correction: false
      i_enable_temperature_correction: false
      i_enable_wiggle_correction: false
      i_enable_phase_unwrapping: false
      i_enable_phase_shuffle_temporal_filter: true
      i_phase_unwrapping_level: 4
      i_phase_unwrap_error_threshold: 100
      i_median_filter: MEDIAN_OFF