DepthAI Python API
package
depthai
package
depthai.filters
module
params
Parameters for filters
module
depthai.filters.params
class
MedianFilter
Members: MEDIAN_OFF KERNEL_3x3 KERNEL_5x5 KERNEL_7x7
class
SpatialFilter
class
SpeckleFilter
class
TemporalFilter
Temporal filtering with optional persistence.
class
depthai.filters.params.MedianFilter
variable
variable
variable
variable
variable
method
method
method
method
method
method
method
method
method
method
property
property
class
depthai.filters.params.SpatialFilter
method
method
property
alpha
The alpha factor in an exponential moving average: alpha = 1 means no filter, alpha = 0 means an infinite filter. Determines the amount of smoothing.
method
property
delta
Step-size boundary. Establishes the threshold used to preserve "edges". If the disparity difference between neighboring pixels exceeds the threshold set by this delta parameter, filtering is temporarily disabled. The default value 0 means auto: 3 disparity integer levels; in subpixel mode, 3 × the number of subpixel levels.
method
property
enable
Whether to enable or disable the filter.
method
property
holeFillingRadius
Search radius for hole filling: an in-place, heuristic, symmetric hole-filling mode applied horizontally during the filter passes. Intended to rectify minor artefacts with minimal performance impact.
method
property
numIterations
Number of iterations over the image in both horizontal and vertical direction.
method
class
depthai.filters.params.SpeckleFilter
method
method
property
differenceThreshold
Maximum difference between neighbor disparity pixels to put them into the same blob. Units in disparity integer levels.
method
property
enable
Whether to enable or disable the filter.
method
property
speckleRange
Speckle search range.
method
class
depthai.filters.params.TemporalFilter
class
PersistencyMode
Persistency algorithm type. Members: PERSISTENCY_OFF, VALID_8_OUT_OF_8, VALID_2_IN_LAST_3, VALID_2_IN_LAST_4, VALID_2_OUT_OF_8, VALID_1_IN_LAST_2, VALID_1_IN_LAST_5, VALID_1_IN_LAST_8, PERSISTENCY_INDEFINITELY
method
method
property
alpha
The alpha factor in an exponential moving average: alpha = 1 means no filter, alpha = 0 means an infinite filter. Determines the extent of the temporal history that is averaged.
method
property
delta
Step-size boundary. Establishes the threshold used to preserve surfaces (edges). If the disparity difference between neighboring pixels exceeds the threshold set by this delta parameter, filtering is temporarily disabled. The default value 0 means auto: 3 disparity integer levels; in subpixel mode, 3 × the number of subpixel levels.
method
property
enable
Whether to enable or disable the filter.
method
property
persistencyMode
Persistency mode. If the current disparity/depth value is invalid, it will be replaced by an older value, based on persistency mode.
method
class
depthai.filters.params.TemporalFilter.PersistencyMode
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
method
method
method
method
method
method
method
method
method
method
property
property
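A minimal sketch of how these parameter classes fit together, assuming the bindings expose default constructors for the filter structs (an assumption; only the property and enum names below are documented above). The filters would then be handed to a node such as ImageFilters, documented under depthai.node.

```python
import depthai as dai
from depthai.filters import params

# Median filter strength is chosen via the documented enum members.
median = params.MedianFilter.KERNEL_5x5

# Spatial (edge-preserving) filter: alpha controls smoothing, delta the edge threshold.
spatial = params.SpatialFilter()   # assumes a default constructor
spatial.enable = True
spatial.alpha = 0.5                # 1 = no filter, 0 = infinite filter
spatial.delta = 0                  # 0 = auto (3 disparity integer levels)
spatial.holeFillingRadius = 2
spatial.numIterations = 1

# Temporal filter with persistency: reuse an older value when the current one is invalid.
temporal = params.TemporalFilter() # assumes a default constructor
temporal.enable = True
temporal.alpha = 0.4
temporal.persistencyMode = params.TemporalFilter.PersistencyMode.VALID_2_IN_LAST_4
```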
module
depthai.modelzoo
function
getDefaultCachePath() -> os.PathLike: Get the default cache path (where models are cached)
function
getDefaultModelsPath() -> os.PathLike: Get the default models path (where yaml files are stored)
function
getDownloadEndpoint() -> str: Get the download endpoint (for model querying)
function
getHealthEndpoint() -> str: Get the health endpoint (for internet check)
function
setDefaultCachePath(path: os.PathLike): Set the default cache path (where models are cached). Parameter ``path``:
function
setDefaultModelsPath(path: os.PathLike): Set the default models path (where yaml files are stored). Parameter ``path``:
function
setDownloadEndpoint(endpoint: str): Set the download endpoint (for model querying). Parameter ``endpoint``:
function
setHealthEndpoint(endpoint: str): Set the health endpoint (for internet check). Parameter ``endpoint``:
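A short sketch of redirecting the model zoo paths with the functions listed above; the directory paths are illustrative.

```python
import depthai as dai

# Redirect where downloaded models are cached and where the yaml descriptors live.
dai.modelzoo.setDefaultCachePath("/data/depthai_models")   # cached model binaries
dai.modelzoo.setDefaultModelsPath("/data/depthai_yaml")    # yaml model descriptors

# Inspect the current configuration.
print(dai.modelzoo.getDefaultCachePath())
print(dai.modelzoo.getDownloadEndpoint())  # endpoint used for model queries
```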
package
depthai.nn_archive
module
module
depthai.nn_archive.v1
class
Config
The main class of the multi/single-stage model config scheme (multi-stage models consist of interconnected single-stage models). @type config_version: str @ivar config_version: String representing config schema version in format 'x.y' where x is major version and y is minor version @type model: Model @ivar model: A Model object representing the neural network used in the archive.
class
DataType
Data type of the input data (e.g., 'float32'). Represents all existing data types used in i/o streams of the model. Precision of the model weights. Data type of the output data (e.g., 'float32'). Members: BOOLEAN FLOAT16 FLOAT32 FLOAT64 INT4 INT8 INT16 INT32 INT64 UINT4 UINT8 UINT16 UINT32 UINT64 STRING
class
Head
Represents head of a model. @type name: str | None @ivar name: Optional name of the head. @type parser: str @ivar parser: Name of the parser responsible for processing the model's output. @type outputs: List[str] | None @ivar outputs: Specify which outputs are fed into the parser. If None, all outputs are fed. @type metadata: C{HeadMetadata} | C{HeadObjectDetectionMetadata} | C{HeadClassificationMetadata} | C{HeadObjectDetectionSSDMetadata} | C{HeadSegmentationMetadata} | C{HeadYOLOMetadata} @ivar metadata: Metadata of the parser.
class
Input
Represents input stream of a model. @type name: str @ivar name: Name of the input layer. @type dtype: DataType @ivar dtype: Data type of the input data (e.g., 'float32'). @type input_type: InputType @ivar input_type: Type of input data (e.g., 'image'). @type shape: list @ivar shape: Shape of the input data as a list of integers (e.g. [H,W], [H,W,C], [N,H,W,C], ...). @type layout: str @ivar layout: Lettercode interpretation of the input data dimensions (e.g., 'NCHW'). @type preprocessing: PreprocessingBlock @ivar preprocessing: Preprocessing steps applied to the input data.
class
InputType
Members: IMAGE RAW
class
Metadata
Metadata of the parser. This aggregates the metadata fields of all head types.
Object detection head: @type classes: list @ivar classes: Names of object classes detected by the model. @type n_classes: int @ivar n_classes: Number of object classes detected by the model. @type iou_threshold: float @ivar iou_threshold: Non-max suppression threshold limiting boxes intersection. @type conf_threshold: float @ivar conf_threshold: Confidence score threshold above which a detected object is considered valid. @type max_det: int @ivar max_det: Maximum detections per image. @type anchors: list @ivar anchors: Predefined bounding boxes of different sizes and aspect ratios. The innermost lists are length-2 tuples of box sizes. The middle lists are anchors for each output. The outermost lists go from smallest to largest output.
Classification head: @type classes: list @ivar classes: Names of object classes classified by the model. @type n_classes: int @ivar n_classes: Number of object classes classified by the model. @type is_softmax: bool @ivar is_softmax: True, if output is already softmaxed.
SSD object detection head: @type boxes_outputs: str @ivar boxes_outputs: Output name corresponding to predicted bounding box coordinates. @type scores_outputs: str @ivar scores_outputs: Output name corresponding to predicted bounding box confidence scores.
Segmentation head: @type classes: list @ivar classes: Names of object classes segmented by the model. @type n_classes: int @ivar n_classes: Number of object classes segmented by the model. @type is_softmax: bool @ivar is_softmax: True, if output is already softmaxed.
YOLO head: @type yolo_outputs: list @ivar yolo_outputs: A list of output names for each of the different YOLO grid sizes. @type mask_outputs: list | None @ivar mask_outputs: A list of output names for each mask output. @type protos_outputs: str | None @ivar protos_outputs: Output name for the protos. @type keypoints_outputs: list | None @ivar keypoints_outputs: A list of output names for the keypoints. @type angles_outputs: list | None @ivar angles_outputs: A list of output names for the angles. @type subtype: str @ivar subtype: YOLO family decoding subtype (e.g. yolov5, yolov6, yolov7, etc.). @type n_prototypes: int | None @ivar n_prototypes: Number of prototypes per bbox in YOLO instance segmentation. @type n_keypoints: int | None @ivar n_keypoints: Number of keypoints per bbox in YOLO keypoint detection. @type is_softmax: bool | None @ivar is_softmax: True, if output is already softmaxed in YOLO instance segmentation.
Basic head: allows specifying additional fields. @type postprocessor_path: str | None @ivar postprocessor_path: Path to the postprocessor.
class
MetadataClass
Metadata object defining the model metadata. Represents metadata of a model. @type name: str @ivar name: Name of the model. @type path: str @ivar path: Relative path to the model executable.
class
Model
A Model object representing the neural network used in the archive. Class defining a single-stage model config scheme. @type metadata: Metadata @ivar metadata: Metadata object defining the model metadata. @type inputs: list @ivar inputs: List of Input objects defining the model inputs. @type outputs: list @ivar outputs: List of Output objects defining the model outputs. @type heads: list @ivar heads: List of Head objects defining the model heads. If not defined, we assume a raw output.
class
Output
Represents output stream of a model. @type name: str @ivar name: Name of the output layer. @type dtype: DataType @ivar dtype: Data type of the output data (e.g., 'float32').
class
PreprocessingBlock
Preprocessing steps applied to the input data. Represents preprocessing operations applied to the input data. @type mean: list | None @ivar mean: Mean values in channel order. The order depends on the channel order the model was trained with. @type scale: list | None @ivar scale: Standardization values in channel order. The order depends on the channel order the model was trained with. @type reverse_channels: bool | None @ivar reverse_channels: If True, input to the model is RGB; otherwise BGR. @type interleaved_to_planar: bool | None @ivar interleaved_to_planar: If True, input to the model is interleaved (NHWC); otherwise planar (NCHW). @type dai_type: str | None @ivar dai_type: DepthAI input type which is read by DepthAI to automatically set up the pipeline.
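A sketch of inspecting these config objects from a loaded archive, assuming dai.NNArchive exposes the v1 config via getConfig() (the accessor name may differ between releases); the archive path is hypothetical.

```python
import depthai as dai

archive = dai.NNArchive("yolov6n.rvc4.tar.xz")  # hypothetical archive path
cfg = archive.getConfig()                        # assumed: returns depthai.nn_archive.v1.Config

print(cfg.configVersion)            # e.g. '1.0'
model = cfg.model
print(model.metadata.name)          # model name from MetadataClass
for inp in model.inputs:            # Input objects: name, dtype, shape, layout
    print(inp.name, inp.shape, inp.layout)
for head in model.heads or []:      # Head objects; may be None for raw outputs
    print(head.parser)
```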
class
depthai.nn_archive.v1.Config
method
property
configVersion
String representing config schema version in format 'x.y' where x is major version and y is minor version.
method
property
model
A Model object representing the neural network used in the archive.
method
class
depthai.nn_archive.v1.DataType
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
method
method
method
method
method
method
method
method
method
method
property
property
class
depthai.nn_archive.v1.Head
class
depthai.nn_archive.v1.Input
method
property
dtype
Data type of the input data (e.g., 'float32').
method
property
inputType
Type of input data (e.g., 'image').
method
property
layout
Lettercode interpretation of the input data dimensions (e.g., 'NCHW')
method
property
name
Name of the input layer.
method
property
preprocessing
Preprocessing steps applied to the input data.
method
property
shape
Shape of the input data as a list of integers (e.g. [H,W], [H,W,C], [N,H,W,C], ...).
method
class
depthai.nn_archive.v1.InputType
variable
variable
variable
method
method
method
method
method
method
method
method
method
method
property
property
class
depthai.nn_archive.v1.Metadata
method
property
anchors
Predefined bounding boxes of different sizes and aspect ratios. The innermost lists are length-2 tuples of box sizes. The middle lists are anchors for each output. The outermost lists go from smallest to largest output.
method
property
anglesOutputs
A list of output names for the angles.
method
property
boxesOutputs
Output name corresponding to predicted bounding box coordinates.
method
property
classes
Names of object classes recognized by the model.
method
property
confThreshold
Confidence score threshold above which a detected object is considered valid.
method
property
extraParams
Additional parameters
method
property
iouThreshold
Non-max suppression threshold limiting boxes intersection.
method
property
isSoftmax
True, if the output is already softmaxed (for YOLO heads, this refers to the instance segmentation output).
method
property
keypointsOutputs
A list of output names for the keypoints.
method
property
maskOutputs
A list of output names for each mask output.
method
property
maxDet
Maximum detections per image.
method
property
nClasses
Number of object classes recognized by the model.
method
property
nKeypoints
Number of keypoints per bbox in YOLO keypoint detection.
method
property
nPrototypes
Number of prototypes per bbox in YOLO instance segmentation.
method
property
postprocessorPath
Path to the postprocessor.
method
property
protosOutputs
Output name for the protos.
method
property
scoresOutputs
Output name corresponding to predicted bounding box confidence scores.
method
property
subtype
YOLO family decoding subtype (e.g. yolov5, yolov6, yolov7 etc.).
method
property
yoloOutputs
A list of output names for each of the different YOLO grid sizes.
method
class
depthai.nn_archive.v1.Model
method
property
heads
List of Head objects defining the model heads. If not defined, we assume a raw output.
method
property
inputs
List of Input objects defining the model inputs.
method
property
metadata
Metadata object defining the model metadata.
method
property
outputs
List of Output objects defining the model outputs.
method
class
depthai.nn_archive.v1.Output
class
depthai.nn_archive.v1.PreprocessingBlock
method
property
daiType
DepthAI input type which is read by DepthAI to automatically setup the pipeline.
method
property
interleavedToPlanar
If True, input to the model is interleaved (NHWC); otherwise planar (NCHW).
method
property
mean
Mean values in channel order. The order depends on the channel order the model was trained with.
method
property
reverseChannels
If True, input to the model is RGB; otherwise BGR.
method
property
scale
Standardization values in channel order. The order depends on the channel order the model was trained with.
method
package
depthai.node
module
class
AprilTag
AprilTag node.
class
class
BasaltVIO
Basalt Visual Inertial Odometry node. Performs VIO on stereo images and IMU data.
class
class
class
class
ColorCamera
ColorCamera node. For use with color sensors.
class
DetectionNetwork
DetectionNetwork, base for different network specializations
class
DetectionParser
DetectionParser node. Parses detection results from Mobilenet-SSD or YOLO neural networks. @note If multiple detection heads are present in the NNArchive, only one type is supported (either YOLO or Mobilenet-SSD) and the last one will be used.
class
class
EdgeDetector
EdgeDetector node. Performs edge detection using 3x3 Sobel filter
class
FeatureTracker
FeatureTracker node. Performs feature tracking and reidentification using motion estimation between 2 consecutive frames.
class
Gate
Gate Node. This node acts as a valve for data pipelines. It controls the flow of messages from the 'input' to the 'output' based on the state configured via 'inputControl'. It can be configured to stay open indefinitely, stay closed, or open for a specific number of messages.
class
class
IMU
IMU node for BNO08X.
class
ImageAlign
ImageAlign node. Aligns an input frame to another frame (e.g. depth to color).
class
class
ImageManip
ImageManip node. Capability to crop, resize, warp, ... incoming image frames
class
class
MonoCamera
MonoCamera node. For use with grayscale sensors.
class
NeuralAssistedStereo
NeuralAssistedStereo node. Combines Neural Depth with VPP and traditional Stereo Depth. This composite node internally creates and connects:
- Rectification node (full resolution)
- NeuralDepth node (low resolution depth estimation)
- VPP node (applies virtual projection pattern)
- StereoDepth node (final depth computation on VPP-enhanced images)
Pipeline structure: Left/Right Cameras → Rectification; Rectification → VPP (full resolution frames) and → NeuralDepth (low resolution); NeuralDepth → VPP (disparity + confidence); VPP (combines neural depth with the full resolution images) → StereoDepth → final depth output.
class
NeuralDepth
NeuralDepth node. Compute depth from left-right image pair using neural network.
class
NeuralNetwork
NeuralNetwork node. Runs a neural inference on input data.
class
ObjectTracker
ObjectTracker node. Performs object tracking using Kalman filter and hungarian algorithm.
class
PointCloud
PointCloud node. Computes point cloud from depth frames.
class
RGBD
RGBD node. Combines depth and color frames into a single point cloud.
class
RTABMapSLAM
RTABMap SLAM node. Performs SLAM on given odometry pose, rectified frame and depth frame.
class
RTABMapVIO
RTABMap Visual Inertial Odometry node. Performs VIO on rectified frame, depth frame and IMU data.
class
RecordMetadataOnly
RecordMetadataOnly node, used to record a source stream to a file
class
RecordVideo
RecordVideo node, used to record a video source stream to a file
class
class
ReplayMetadataOnly
Replay node, used to replay a file to a source node
class
ReplayVideo
Replay node, used to replay a file to a source node
class
SPIIn
SPIIn node. Receives messages over SPI.
class
SPIOut
SPIOut node. Sends messages over SPI.
class
class
SegmentationParser
SegmentationParser node. Parses raw segmentation output from segmentation neural networks into a dai::SegmentationMask datatype. The parser supports two output model types:
1. Single-channel output, where the model argmaxes the class probabilities internally and outputs a single-channel mask with class indices.
2. Multi-channel output, where each channel corresponds to the probability map for a specific class. The parser will perform argmax across channels to generate the final mask.
The parser can be configured to treat the first class (index 0) as the background class, which will be ignored in the final segmentation mask. .. warning:: Only OAK4 supports running SegmentationParser on device. On other platforms, the node will automatically switch to host execution.
class
SpatialDetectionNetwork
SpatialDetectionNetwork node. Runs a neural inference on input image and calculates spatial location data.
class
SpatialLocationCalculator
SpatialLocationCalculator node. Calculates the spatial locations of detected objects based on the input depth map. Spatial location calculations can be additionally refined by using a segmentation mask. If keypoints are provided, the spatial location is calculated around each keypoint.
class
StereoDepth
StereoDepth node. Compute stereo disparity and depth from left-right image pair.
class
Sync
Sync node. Performs syncing between image frames
class
SystemLogger
SystemLogger node. Send system information periodically.
class
Thermal
Thermal node.
class
class
class
ToFBase
ToFBase node. Decodes raw time-of-flight (ToF) sensor data into depth frames.
class
ToFDepthConfidenceFilter
Node for depth confidence filter, designed to be used with the `ToF` node.
class
UVC
UVC (USB Video Class) node
class
VideoEncoder
VideoEncoder node. Encodes frames into MJPEG, H264 or H265.
class
Vpp
Vpp node. Apply Virtual Projection Pattern algorithm to stereo images based on disparity.
class
Warp
Warp node. Capability to crop, resize, warp, ... incoming image frames
class
depthai.node.AprilTag(depthai.DeviceNode)
method
getNumThreads(self) -> int: Get number of threads to use for AprilTag detection. Returns: Number of threads to use.
method
getWaitForConfigInput(self) -> bool: Get whether or not to wait until a configuration message arrives on the inputConfig input.
method
runOnHost(self) -> bool: Check if the node is set to run on host
method
setNumThreads(self, numThreads: int): Set number of threads to use for AprilTag detection. Parameter ``numThreads``: Number of threads to use.
method
setRunOnHost(self, arg0: bool): Specify whether to run on host or device. By default, the node will run on device.
method
setWaitForConfigInput(self, wait: bool): Specify whether or not to wait until a configuration message arrives on the inputConfig input. Parameter ``wait``: True to wait for configuration message, false otherwise.
property
initialConfig
Initial config to use for AprilTag detection.
property
inputConfig
Input AprilTagConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
inputImage
Input ImgFrame message on which AprilTag detection is performed. Default queue is non-blocking with size 4.
property
out
Outputs an AprilTags message that carries the detection results.
property
passthroughInputImage
Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
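A minimal sketch of the node in a pipeline, assuming a connected device and the v3 output-request/queue API; the message field name follows the v2 message layout and is an assumption.

```python
import depthai as dai

with dai.Pipeline() as pipeline:
    cam = pipeline.create(dai.node.Camera).build()      # default board socket
    april = pipeline.create(dai.node.AprilTag)
    april.setNumThreads(2)                              # documented setter

    cam.requestOutput((640, 400)).link(april.inputImage)
    q = april.out.createOutputQueue()

    pipeline.start()
    while pipeline.isRunning():
        tags = q.get()                                  # AprilTags message
        print(len(tags.aprilTags), "tags detected")     # field name assumed from v2
```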
class
depthai.node.AutoCalibration(depthai.DeviceNode)
class
depthai.node.BasaltVIO(depthai.node.ThreadedHostNode)
method
method
method
method
method
method
method
method
method
method
property
imu
Input IMU data.
property
property
passthrough
Output passthrough of left image.
property
property
transform
Output transform data.
class
depthai.node.BenchmarkIn(depthai.DeviceNode)
method
logReportsAsWarnings(self, logReportsAsWarnings: bool): Log the reports as warnings
method
measureIndividualLatencies(self, attachLatencies: bool): Attach latencies to the report
method
sendReportEveryNMessages(self, num: int): Specify how many messages to measure for each report
method
setRunOnHost(self, runOnHost: bool): Specify whether to run on host or device. By default, the node will run on device.
property
input
Receive messages as fast as possible
property
passthrough
Passthrough for input messages (so the node can be placed between other nodes)
property
report
Send a benchmark report when the set number of messages are received
class
depthai.node.BenchmarkOut(depthai.DeviceNode)
method
setFps(self, fps: float): Set FPS at which the node is sending out messages. 0 means as fast as possible
method
setNumMessagesToSend(self, num: int): Set number of messages to send; by default, messages are sent indefinitely. Parameter ``num``: number of messages to send
method
setRunOnHost(self, runOnHost: bool): Specify whether to run on host or device. By default, the node will run on device.
property
input
Message that will be sent repeatedly
property
out
Send messages out as fast as possible
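A minimal sketch: measure the rate of a camera stream with BenchmarkIn, using only the setters documented above; the report field name is an assumption.

```python
import depthai as dai

with dai.Pipeline() as pipeline:
    cam = pipeline.create(dai.node.Camera).build()
    bench = pipeline.create(dai.node.BenchmarkIn)
    bench.sendReportEveryNMessages(100)   # one report per 100 frames
    bench.logReportsAsWarnings(True)      # also visible at the default log level

    cam.requestOutput((640, 400)).link(bench.input)
    reports = bench.report.createOutputQueue()

    pipeline.start()
    report = reports.get()                # benchmark report message
    print("fps:", report.fps)             # field name is an assumption
```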
class
depthai.node.Camera(depthai.DeviceNode)
method
method
getBoardSocket(self) -> depthai.CameraBoardSocket: Retrieves which board socket to use. Returns: Board socket to use
method
getImageOrientation(self) -> depthai.CameraImageOrientation: Get camera image orientation. Returns: Image orientation
method
getIspNumFramesPool(self) -> int: Get number of frames in isp pool. Returns: Number of frames
method
getMaxSizePoolIsp(self) -> int: Get maximum size of isp pool. Returns: Maximum size in bytes of isp pool
method
getMaxSizePoolRaw(self) -> int: Get maximum size of raw pool. Returns: Maximum size in bytes of raw pool
method
getOutputsMaxSizePool(self) -> int | None: Get maximum size of outputs pool for all outputs. Returns: Maximum size in bytes of outputs pools
method
getOutputsNumFramesPool(self) -> int | None: Get number of frames in outputs pool for all outputs. Returns: Number of frames
method
getRawNumFramesPool(self) -> int: Get number of frames in raw pool. Returns: Number of frames
method
getSensorType(self) -> depthai.CameraSensorType: Get the sensor type. Returns: Sensor type
method
requestFullResolutionOutput(self, type: depthai.ImgFrame.Type | None = None, fps: float | None = None, useHighestResolution: bool = False) -> depthai.Node.Output: Get a high resolution output with full FOV on the sensor. By default the function will not use resolutions higher than 5000x4000, as those often need a lot of resources, making them hard to use in combination with other nodes. Parameter ``type``: Type of the output (NV12, BGR, ...) - by default it's auto-selected for best performance. Parameter ``fps``: FPS of the output - by default it's auto-selected to the highest that the sensor config supports, or 30, whichever is lower. Parameter ``useHighestResolution``: If true, the function will use the highest resolution available on the sensor, even if it's higher than 5000x4000
method
requestIspOutput(self, fps: float | None = None) -> depthai.Node.Output: Request output with isp resolution. The fps does not vote.
method
method
setImageOrientation(self, imageOrientation: depthai.CameraImageOrientation) -> Camera: Set camera image orientation. Parameter ``imageOrientation``: Image orientation to set. Returns: Shared pointer to the camera node
method
setIspNumFramesPool(self, num: int) -> Camera: Set number of frames in isp pool (will be automatically reduced if the maximum pool memory size is exceeded). Parameter ``num``: Number of frames. Returns: Shared pointer to the camera node
method
setMaxSizePoolIsp(self, size: int) -> Camera: Set maximum size of isp pool. Parameter ``size``: Maximum size in bytes of isp pool. Returns: Shared pointer to the camera node
method
setMaxSizePoolRaw(self, size: int) -> Camera: Set maximum size of raw pool. Parameter ``size``: Maximum size in bytes of raw pool. Returns: Shared pointer to the camera node
method
setMaxSizePools(self, raw: int, isp: int, imgmanip: int) -> Camera: Set maximum memory size of all pools. Parameter ``raw``: Maximum size in bytes of raw pool. Parameter ``isp``: Maximum size in bytes of isp pool. Parameter ``imgmanip``: Maximum size in bytes of outputs pools. Returns: Shared pointer to the camera node
method
setMockIsp(self, mockIsp: ReplayVideo) -> Camera: Set mock ISP for Camera node. Automatically sets mockIsp size. Parameter ``mockIsp``: ReplayVideo node to use as mock ISP
method
setNumFramesPools(self, raw: int, isp: int, imgmanip: int) -> Camera: Set number of frames in all pools (will be automatically reduced if the maximum pool memory size is exceeded). Parameter ``raw``: Number of frames in raw pool. Parameter ``isp``: Number of frames in isp pool. Parameter ``imgmanip``: Number of frames in outputs pools. Returns: Shared pointer to the camera node
method
setOutputsMaxSizePool(self, size: int) -> Camera: Set maximum size of pools for all outputs. Parameter ``size``: Maximum size in bytes of pools for all outputs. Returns: Shared pointer to the camera node
method
setOutputsNumFramesPool(self, num: int) -> Camera: Set number of frames in pools for all outputs. Parameter ``num``: Number of frames in pools for all outputs. Returns: Shared pointer to the camera node
method
setRawNumFramesPool(self, num: int) -> Camera: Set number of frames in raw pool (will be automatically reduced if the maximum pool memory size is exceeded). Parameter ``num``: Number of frames. Returns: Shared pointer to the camera node
method
setSensorType(self, sensorType: depthai.CameraSensorType) -> Camera: Set the sensor type to use. Parameter ``sensorType``: Sensor type to use
property
initialControl
Initial control options to apply to sensor
property
inputControl
Input for CameraControl message, which can modify camera parameters in runtime
property
mockIsp
Input for mocking 'isp' functionality on RVC2. Default queue is blocking with size 8
property
raw
Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data. Captured directly from the camera sensor, and the source for the 'isp' output.
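A sketch of the output-request pattern described above, assuming the v3 API where outputs are requested from the Camera node rather than configured up front; the keyword name for the frame type is an assumption.

```python
import depthai as dai

with dai.Pipeline() as pipeline:
    cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)

    preview = cam.requestOutput((640, 400), type=dai.ImgFrame.Type.BGR888i)
    full = cam.requestFullResolutionOutput()   # full-FOV, capped at 5000x4000 by default

    q = preview.createOutputQueue()
    pipeline.start()
    frame = q.get()                            # ImgFrame
    print(frame.getWidth(), frame.getHeight())
```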
class
depthai.node.ColorCamera(depthai.DeviceNode)
method
method
getBoardSocket(self) -> depthai.CameraBoardSocket: Retrieves which board socket to use. Returns: Board socket to use
method
method
getCamera(self) -> str: Retrieves which camera to use by name. Returns: Name of the camera to use
method
getColorOrder(self) -> depthai.ColorCameraProperties.ColorOrder: Get color order of preview output frames. RGB or BGR
method
getFp16(self) -> bool: Get fp16 (0..255) data of preview output frames
method
getFps(self) -> float: Get rate at which camera should produce frames. Returns: Rate in frames per second
method
method
getImageOrientation(self) -> depthai.CameraImageOrientation: Get camera image orientation
method
getInterleaved(self) -> bool: Get planar or interleaved data of preview output frames
method
getIspHeight(self) -> int: Get 'isp' output height
method
getIspNumFramesPool(self) -> int: Get number of frames in isp pool
method
getIspSize(self) -> tuple[int, int]: Get 'isp' output resolution as size, after scaling
method
getIspWidth(self) -> int: Get 'isp' output width
method
getPreviewHeight(self) -> int: Get preview height
method
getPreviewKeepAspectRatio(self) -> bool: See also: setPreviewKeepAspectRatio. Returns: Preview keep aspect ratio option
method
getPreviewNumFramesPool(self) -> int: Get number of frames in preview pool
method
getPreviewSize(self) -> tuple[int, int]: Get preview size as tuple
method
getPreviewWidth(self) -> int: Get preview width
method
getRawNumFramesPool(self) -> int: Get number of frames in raw pool
method
getResolution(self) -> depthai.ColorCameraProperties.SensorResolution: Get sensor resolution
method
getResolutionHeight(self) -> int: Get sensor resolution height
method
getResolutionSize(self) -> tuple[int, int]: Get sensor resolution as size
method
getResolutionWidth(self) -> int: Get sensor resolution width
method
getSensorCrop(self) -> tuple[float, float]: Returns: Sensor top left crop coordinates
method
getSensorCropX(self) -> float: Get sensor top left x crop coordinate
method
getSensorCropY(self) -> float: Get sensor top left y crop coordinate
method
getStillHeight(self) -> int: Get still height
method
getStillNumFramesPool(self) -> int: Get number of frames in still pool
method
getStillSize(self) -> tuple[int, int]: Get still size as tuple
method
getStillWidth(self) -> int: Get still width
method
getVideoHeight(self) -> int: Get video height
method
getVideoNumFramesPool(self) -> int: Get number of frames in video pool
method
getVideoSize(self) -> tuple[int, int]: Get video size as tuple
method
getVideoWidth(self) -> int: Get video width
method
sensorCenterCrop(self): Specify sensor center crop. Resolution size / video size
method
setBoardSocket(self, boardSocket: depthai.CameraBoardSocket): Specify which board socket to use. Parameter ``boardSocket``: Board socket to use
method
method
setCamera(self, name: str): Specify which camera to use by name. Parameter ``name``: Name of the camera to use
method
setColorOrder(self, colorOrder: depthai.ColorCameraProperties.ColorOrder): Set color order of preview output images. RGB or BGR
method
setFp16(self, fp16: bool): Set fp16 (0..255) data type of preview output frames
method
setFps(self, fps: float): Set rate at which camera should produce frames. Parameter ``fps``: Rate in frames per second
method
method
setImageOrientation(self, imageOrientation: depthai.CameraImageOrientation): Set camera image orientation
method
setInterleaved(self, interleaved: bool): Set planar or interleaved data of preview output frames
method
setIsp3aFps(self, arg0: int): Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A is running on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also slows 3A convergence. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps and a camera control is sent at every frame, but 3A fps is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.
method
setIspNumFramesPool(self, arg0: int): Set number of frames in isp pool
method
method
setNumFramesPool(self, raw: int, isp: int, preview: int, video: int, still: int): Set number of frames in all pools
method
setPreviewKeepAspectRatio(self, keep: bool): Specifies whether preview output should preserve aspect ratio after downscaling from video size, or not. Parameter ``keep``: If true, a larger crop region will be considered to still be able to create the final image in the specified aspect ratio. Otherwise video size is resized to fit preview size
method
setPreviewNumFramesPool(self, arg0: int): Set number of frames in preview pool
method
method
setRawNumFramesPool(self, arg0: int): Set number of frames in raw pool
method
setRawOutputPacked(self, packed: bool): Configures whether the camera `raw` frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (`false`), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000'00pp'pppp'pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.
method
setResolution(self, resolution: depthai.ColorCameraProperties.SensorResolution): Set sensor resolution
method
setSensorCrop(self, x: float, y: float): Specifies the cropping that happens when converting ISP to video output. By default, video will be center cropped from the ISP output. Note that this doesn't actually do on-sensor cropping (and MIPI-stream only that region), but it does postprocessing on the ISP (on RVC). Parameter ``x``: Top left X coordinate. Parameter ``y``: Top left Y coordinate
method
setStillNumFramesPool(self, arg0: int): Set number of frames in still pool
method
method
setVideoNumFramesPool(self, arg0: int): Set number of frames in video pool
method
property
frameEvent
Outputs metadata-only ImgFrame message as an early indicator of an incoming frame. It's sent on the MIPI SoF (start-of-frame) event, just after the exposure of the current frame has finished and before the exposure for next frame starts. Could be used to synchronize various processes with camera capture. Fields populated: camera id, sequence number, timestamp
property
initialControl
Initial control options to apply to sensor
property
inputControl
Input for CameraControl message, which can modify camera parameters in runtime
property
isp
Outputs ImgFrame message that carries YUV420 planar (I420/IYUV) frame data. Generated by the ISP engine, and the source for the 'video', 'preview' and 'still' outputs
property
preview
Outputs ImgFrame message that carries BGR/RGB planar/interleaved encoded frame data. Suitable for use with NeuralNetwork node
property
raw
Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data. Captured directly from the camera sensor, and the source for the 'isp' output.
property
still
Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data. The message is sent only when a CameraControl message arrives to inputControl with captureStill command set.
property
video
Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data. Suitable for use with VideoEncoder node
class
depthai.node.DetectionNetwork(depthai.DeviceNodeGroup)
class
method
method
method
method
getConfidenceThreshold(self) -> float: Retrieves threshold at which to filter the rest of the detections. Returns: Detection confidence
method
getNumInferenceThreads(self) -> int: How many inference threads will be used to run the network. Returns: Number of threads, 0, 1 or 2. Zero means AUTO
method
setBackend(self, setBackend: str): Specifies backend to use. Parameter ``backend``: String specifying backend to use
method
setBackendProperties(self, setBackendProperties: dict[str, str]): Set backend properties. Parameter ``backendProperties``: backend properties map
method
method
setBlobPath(self, path: os.PathLike): Load network blob into assets and use once pipeline is started. Throws: Error if file doesn't exist or isn't a valid network blob. Parameter ``path``: Path to network blob
method
setConfidenceThreshold(self, thresh: float): Specifies confidence threshold at which to filter the rest of the detections. Parameter ``thresh``: Detection confidence must be greater than specified threshold to be added to the list
method
setFromModelZoo(self, description: depthai.NNModelDescription, useCached: bool = False): Download model from zoo and set it for this Node. Parameter ``description``: Model description to download. Parameter ``useCached``: Use cached model if available
method
setModelPath(self, modelPath: os.PathLike): Load network model into assets. Parameter ``modelPath``: Path to the model file.
method
method
setNumInferenceThreads(self, numThreads: int): How many threads should the node use to run the network. Parameter ``numThreads``: Number of threads to dedicate to this node
method
setNumNCEPerInferenceThread(self, numNCEPerThread: int): How many Neural Compute Engines should a single thread use for inference. Parameter ``numNCEPerThread``: Number of NCE per thread
method
setNumPoolFrames(self, numFrames: int): Specifies how many frames will be available in the pool. Parameter ``numFrames``: How many frames will pool have
method
setNumShavesPerInferenceThread(self, numShavesPerInferenceThread: int): How many Shaves should a single thread use for inference. Parameter ``numShavesPerThread``: Number of shaves per thread
property
property
input
Input message with data to be inferred upon
property
property
out
Outputs ImgDetections message that carries parsed detection results. Overrides NeuralNetwork 'out' with ImgDetections output message type.
property
outNetwork
Outputs unparsed inference results.
property
passthrough
Passthrough message on which the inference was performed. Suitable for when input queue is set to non-blocking behavior.
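A sketch combining the documented setters with the model-zoo download path; the model name and input size are illustrative, and matching the camera output to the model's expected input is left to the reader.

```python
import depthai as dai

with dai.Pipeline() as pipeline:
    cam = pipeline.create(dai.node.Camera).build()
    nn = pipeline.create(dai.node.DetectionNetwork)
    nn.setFromModelZoo(dai.NNModelDescription("yolov6-nano"), useCached=True)
    nn.setConfidenceThreshold(0.5)        # drop detections below 0.5 confidence

    cam.requestOutput((512, 288)).link(nn.input)
    detections = nn.out.createOutputQueue()

    pipeline.start()
    for det in detections.get().detections:   # ImgDetections message
        print(det.label, det.confidence)
```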
class
depthai.node.DetectionNetwork.Model
method
class
depthai.node.DetectionParser(depthai.DeviceNode)
method
build(self, input: depthai.Node.Output, nnArchive: depthai.NNArchive) -> DetectionParser: Build DetectionParser node. Connect output to this node's input. Also call setNNArchive() with the provided NNArchive. Parameter ``input``: Output to link. Parameter ``nnArchive``: Neural network archive
method
getAnchorMasks(self) -> dict[str, list[int]]: Get anchor masks for anchor-based yolo models
method
getAnchors(self) -> list[float]: Get anchors for anchor-based yolo models
method
getClasses(self) -> list[str] | None: Get class names to decode.
method
getConfidenceThreshold(self) -> float: Retrieves threshold at which to filter the rest of the detections. Returns: Detection confidence
method
getCoordinateSize(self) -> int: Get number of coordinates per bounding box.
method
getDecodeKeypoints(self) -> bool: Get whether keypoints decoding is enabled.
method
getDecodeSegmentation(self) -> bool: Get whether segmentation mask decoding is enabled.
method
getIouThreshold(self) -> float: Get IOU threshold for non-maxima suppression
method
getNNFamily(self) -> depthai.DetectionNetworkType: Gets NN Family to parse
method
getNkeypoints(self) -> int: Get number of keypoints to decode.
method
getNumClasses(self) -> int: Get number of classes to decode.
method
getNumFramesPool(self) -> int: Returns number of frames in pool
method
getStrides(self) -> list[int]: Get strides for yolo models
method
getSubtype(self) -> str: Get subtype for the parser.
method
runOnHost(self) -> bool: Check if the node is set to run on host
method
setAnchorMasks(self, anchorMasks: dict[str, list[int]]): Set anchor masks for anchor-based yolo models. Parameter ``anchorMasks``: Map of anchor masks
method
method
method
setBlobPath(self, path: os.PathLike): Load network blob into assets and use once pipeline is started. Throws: Error if file doesn't exist or isn't a valid network blob. Parameter ``path``: Path to network blob
method
setClasses(self, classes: list[str]): Set class names. This will clear any previously set number of classes. Parameter ``classes``: Vector of class names
method
setConfidenceThreshold(self, thresh: float): Specifies confidence threshold at which to filter the rest of the detections. Parameter ``thresh``: Detection confidence must be greater than specified threshold to be added to the list
method
setCoordinateSize(self, coordinates: int): Sets the number of coordinates per bounding box. Parameter ``coordinates``: Number of coordinates. Default is 4
method
setDecodeKeypoints(self, decode: bool): Enable/disable keypoints decoding. If enabled, number of keypoints must also be set.
method
setDecodeSegmentation(self, decode: bool): Enable/disable segmentation mask decoding.
method
method
setIouThreshold(self, thresh: float): Set IOU threshold for non-maxima suppression. Parameter ``thresh``: IOU threshold
method
setKeypointEdges(self, edges: list[typing.Annotated[list[int], pybind11_stubgen.typing_ext.FixedSize(2)]]): Set edge connections between keypoints. Parameter ``edges``: Vector of edge connections represented as pairs of keypoint indices. @note This is only applicable if keypoints decoding is enabled.
method
setNNArchive(self, nnArchive: depthai.NNArchive): Set NNArchive for this Node. If the archive's type is SUPERBLOB, use default number of shaves. Parameter ``nnArchive``: NNArchive to set
method
setNNFamily(self, type: depthai.DetectionNetworkType): Sets NN Family to parse. Possible values are: DetectionNetworkType::YOLO - 0, DetectionNetworkType::MOBILENET - 1. .. warning:: If NN Family is set manually, user must ensure that it matches the actual model being used.
method
setNumClasses(self, numClasses: int): Set number of classes. This will clear any previously set class names. Parameter ``numClasses``: Number of classes
method
setNumFramesPool(self, numFramesPool: int): Specify number of frames in pool. Parameter ``numFramesPool``: How many frames should the pool have
method
setNumKeypoints(self, numKeypoints: int): Set number of keypoints to decode. Automatically enables keypoints decoding.
method
setRunOnHost(self, runOnHost: bool): Specify whether to run on host or device. By default, the node will run on device.
method
setStrides(self, strides: list[int]): Set strides for yolo models
method
setSubtype(self, subtype: str): Set subtype for the parser. Parameter ``subtype``: Subtype string; currently supported subtypes are: yolov6r1, yolov6r2, yolov8n, yolov6, yolov8, yolov10, yolov11, yolov3, yolov3-tiny, yolov5, yolov7, yolo-p, yolov5-u
property
input
Input NN results with detection data to parse. Default queue is blocking with size 5
property
out
Outputs ImgDetections message that carries parsed detection results
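A sketch of wiring the parser behind a raw NeuralNetwork node, mirroring the documented setters; configuring the NeuralNetwork itself (model, input) is assumed to happen elsewhere.

```python
import depthai as dai

with dai.Pipeline() as pipeline:
    nn = pipeline.create(dai.node.NeuralNetwork)   # assumed configured elsewhere
    parser = pipeline.create(dai.node.DetectionParser)
    parser.setNNFamily(dai.DetectionNetworkType.YOLO)
    parser.setNumClasses(80)
    parser.setConfidenceThreshold(0.5)
    parser.setIouThreshold(0.45)                   # non-maxima suppression

    nn.out.link(parser.input)
    detections = parser.out.createOutputQueue()
```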
class
depthai.node.DynamicCalibration(depthai.DeviceNode)
method
method
setRunOnHost(self, runOnHost: bool): Specify whether to run on host or device. By default, the node will run on host on RVC2 and on device on RVC4.
property
calibrationOutput
Output calibration quality result
property
property
inputControl
Input DynamicCalibrationControl message with ability to modify parameters in runtime.
property
property
property
property
class
depthai.node.EdgeDetector(depthai.DeviceNode)
method
setMaxOutputFrameSize(self, arg0: int): Specify maximum size of output image. Parameter ``maxFrameSize``: Maximum frame size in bytes
method
setNumFramesPool(self, arg0: int): Specify number of frames in pool. Parameter ``numFramesPool``: How many frames should the pool have
property
initialConfig
Initial config to use for edge detection.
property
inputConfig
Input EdgeDetectorConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
inputImage
Input image on which edge detection is performed. Default queue is non-blocking with size 4.
property
outputImage
Outputs image frame with detected edges
class
depthai.node.FeatureTracker(depthai.DeviceNode)
method
setHardwareResources(self, numShaves: int, numMemorySlices: int): Specify allocated hardware resources for feature tracking. 2 shaves/memory slices are required for optical flow, 1 for corner detection only. Parameter ``numShaves``: Number of shaves. Maximum 2. Parameter ``numMemorySlices``: Number of memory slices. Maximum 2.
property
initialConfig
Initial config to use for feature tracking.
property
inputConfig
Input FeatureTrackerConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
inputImage
Input message with frame data on which feature tracking is performed. Default queue is non-blocking with size 4.
property
outputFeatures
Outputs TrackedFeatures message that carries tracked features results.
property
passthroughInputImage
Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
class
depthai.node.Gate(depthai.DeviceNode)
method
runOnHost(self) -> bool: Check if the node is configured to run on the host. Returns: true if running on host, false otherwise.
method
setRunOnHost(self, runOnHost: bool): Specify whether to run on host or device. By default, the node will run on device.
property
initialConfig
Initial config of the node.
method
property
input
Main data input. Accepts arbitrary Buffer messages (e.g., ImgFrame, NNData). If the Gate is Open, messages received here are forwarded to 'output'. If the Gate is Closed, messages received here are discarded/dropped. Default queue size: 1, blocking: False
property
inputControl
Control input. Accepts 'GateControl' messages to dynamically change the Gate's state. Use this to Open/Close the gate or set it to pass a specific number of frames at runtime. Default queue size: 4
property
output
Main data output. Forwards messages that were allowed through the Gate. The data type matches the input message.
class
depthai.node.HostNode(depthai.node.ThreadedHostNode)
CLASS_METHOD
method
method
method
method
method
method
method
method
sendProcessingToPipeline(self, arg0: bool): Send processing to pipeline. If set to true, you must call `pipeline.run()` or `pipeline.processTasks()` in the main thread; if set to false, this is not needed.
property
property
class
depthai.node.IMU(depthai.DeviceNode)
method
enableFirmwareUpdate(self, arg0: bool): Whether to perform firmware update or not. Default value: false.
method
method
getBatchReportThreshold(self) -> int: Above this packet threshold data will be sent to host, if queue is not blocked
method
getMaxBatchReports(self) -> int: Maximum number of IMU packets in a batch report
method
setBatchReportThreshold(self, batchReportThreshold: int): Above this packet threshold data will be sent to host, if queue is not blocked
method
setMaxBatchReports(self, maxBatchReports: int): Maximum number of IMU packets in a batch report
property
mockIn
Mock IMU data for replaying recorded data
property
out
Outputs IMUData message that carries IMU packets.
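A sketch of IMU batching using the setters documented above. Enabling a specific sensor (enableIMUSensor) is assumed from the wider DepthAI API and may differ on your release.

```python
import depthai as dai

with dai.Pipeline() as pipeline:
    imu = pipeline.create(dai.node.IMU)
    imu.enableIMUSensor(dai.IMUSensor.ACCELEROMETER_RAW, 480)  # assumed v2-style API
    imu.setBatchReportThreshold(1)   # send to host as soon as 1 packet is ready
    imu.setMaxBatchReports(10)       # at most 10 packets per IMUData message

    q = imu.out.createOutputQueue()
    pipeline.start()
    imu_data = q.get()               # IMUData message carrying a batch of packets
```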
class
depthai.node.ImageAlign(depthai.DeviceNode)
method
runOnHost(self) -> bool: Check if the node is set to run on host
method
setInterpolation(self, interp: depthai.Interpolation) -> ImageAlign: Specify interpolation method to use when resizing
method
setNumFramesPool(self, numFramesPool: int) -> ImageAlign: Specify number of frames in the pool
method
setNumShaves(self, numShaves: int) -> ImageAlign: Specify number of shaves to use for this node
method
setOutKeepAspectRatio(self, keep: bool) -> ImageAlign: Specify whether to keep aspect ratio when resizing
method
setOutputSize(self, alignWidth: int, alignHeight: int) -> ImageAlign: Specify the output size of the aligned image
method
setRunOnHost(self, runOnHost: bool): Specify whether to run on host or device. By default, the node will run on device.
property
initialConfig
Initial config to use for image alignment.
property
input
Input message. Default queue is non-blocking with size 4.
property
inputAlignTo
Input align to message. Default queue is non-blocking with size 1.
property
inputConfig
Input message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
outputAligned
Outputs ImgFrame message that is aligned to inputAlignTo.
property
passthroughInput
Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
class
depthai.node.ImageFilters(depthai.DeviceNode)
method
method
runOnHost(self) -> bool: Check if the node is set to run on host
method
setRunOnHost(self, runOnHost: bool): Specify whether to run on host or device. By default, the node will run on device.
property
initialConfig
Initial config for image filters.
property
input
Input for image frames to be filtered
property
inputConfig
Config to be set for a specific filter
property
output
Filtered frame
class
depthai.node.ImageManip(depthai.DeviceNode)
class
Backend
Members: HW CPU
class
PerformanceMode
Members: BALANCED PERFORMANCE LOW_POWER
method
setBackend(self, arg0: ImageManip.Backend) -> ImageManip: Set backend preference. Parameter ``backend``: Backend preference
method
setMaxOutputFrameSize(self, arg0: int): Specify maximum size of output image. Parameter ``maxFrameSize``: Maximum frame size in bytes
method
setNumFramesPool(self, arg0: int): Specify number of frames in pool. Parameter ``numFramesPool``: How many frames should the pool have
method
setPerformanceMode(self, arg0: ImageManip.PerformanceMode) -> ImageManip: Set performance mode. Parameter ``performanceMode``: Performance mode
method
setRunOnHost(self, arg0: bool) -> ImageManip: Specify whether to run on host or device. Parameter ``runOnHost``: Run node on host
property
initialConfig
Initial config to use when manipulating frames
property
inputConfig
Input ImageManipConfig message with ability to modify parameters in runtime
property
inputImage
Input image to be modified
property
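A sketch of an ImageManip resize stage. The initialConfig call and the `out` output name are assumptions about the v3-style API; the setters on the node itself are documented above.

```python
import depthai as dai

with dai.Pipeline() as pipeline:
    cam = pipeline.create(dai.node.Camera).build()
    manip = pipeline.create(dai.node.ImageManip)
    manip.initialConfig.setOutputSize(640, 480)          # assumed config method
    manip.setMaxOutputFrameSize(640 * 480 * 3)           # documented: bytes
    manip.setBackend(dai.node.ImageManip.Backend.CPU)    # documented enum

    cam.requestOutput((1920, 1080)).link(manip.inputImage)
    q = manip.out.createOutputQueue()                    # output name assumed
```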
class
depthai.node.ImageManip.Backend
variable
variable
variable
method
method
method
method
method
method
method
method
method
method
property
property
class
depthai.node.ImageManip.PerformanceMode
variable
variable
variable
variable
method
method
method
method
method
method
method
method
method
method
property
property
class
depthai.node.MessageDemux(depthai.DeviceNode)
method
getProcessor(self) -> depthai.ProcessorType: Get on which processor the node should run. Returns: Processor type - Leon CSS or Leon MSS
method
setProcessor(self, arg0: depthai.ProcessorType): Specify on which processor the node should run. RVC2 only. Parameter ``type``: Processor type - Leon CSS or Leon MSS
property
input
Input message of type MessageGroup
property
outputs
A map of outputs, where keys are same as in the input MessageGroup
class
depthai.node.MonoCamera(depthai.DeviceNode)
method
getBoardSocket(self) -> depthai.CameraBoardSocket: Retrieves which board socket to use. Returns: Board socket to use
method
method
getCamera(self) -> str: Retrieves which camera to use by name. Returns: Name of the camera to use
method
getFps(self) -> float: Get rate at which camera should produce frames. Returns: Rate in frames per second
method
method
getImageOrientation(self) -> depthai.CameraImageOrientation: Get camera image orientation
method
getNumFramesPool(self) -> int: Get number of frames in main (ISP output) pool
method
getRawNumFramesPool(self) -> int: Get number of frames in raw pool
method
getResolution(self) -> depthai.MonoCameraProperties.SensorResolution: Get sensor resolution
method
getResolutionHeight(self) -> int: Get sensor resolution height
method
getResolutionSize(self) -> tuple[int, int]: Get sensor resolution as size
method
getResolutionWidth(self) -> int: Get sensor resolution width
method
setBoardSocket(self, boardSocket: depthai.CameraBoardSocket): Specify which board socket to use. Parameter ``boardSocket``: Board socket to use
method
method
setCamera(self, name: str): Specify which camera to use by name. Parameter ``name``: Name of the camera to use
method
setFps(self, fps: float): Set rate at which camera should produce frames. Parameter ``fps``: Rate in frames per second
method
method
setImageOrientation(self, imageOrientation: depthai.CameraImageOrientation): Set camera image orientation
method
setIsp3aFps(self, arg0: int): Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A is running on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also slows 3A convergence. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps and a camera control is sent at every frame, but 3A fps is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.
method
setNumFramesPool(self, arg0: int): Set number of frames in main (ISP output) pool
method
setRawNumFramesPool(self, arg0: int): Set number of frames in raw pool
method
setRawOutputPacked(self, packed: bool): Configures whether the camera `raw` frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (`false`), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000'00pp'pppp'pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.
method
setResolution(self, resolution: depthai.MonoCameraProperties.SensorResolution): Set sensor resolution
property
property
initialControl
Initial control options to apply to sensor
property
property
property
class
depthai.node.NeuralAssistedStereo(depthai.DeviceNode)
method
property
property
property
property
property
property
property
property
property
property
property
property
property
property
property
property
property
class
depthai.node.NeuralDepth(depthai.DeviceNode)
static method
NeuralDepth.getInputSize(model: depthai.DeviceModelZoo) -> tuple[int, int]: Get input size for specific model
method
method
setRectification(self, enable: bool) -> NeuralDepth: Enable or disable rectification (useful for prerectified inputs)
property
confidence
Output confidence ImgFrame
property
depth
Output depth ImgFrame
property
disparity
Output disparity ImgFrame
property
edge
Output edge ImgFrame
property
initialConfig
Initial config to use for NeuralDepth.
property
inputConfig
Input config to modify parameters in runtime.
property
left
Input for left ImgFrame of left-right pair
property
property
property
property
rectifiedLeft
Output for rectified left ImgFrame
property
rectifiedRight
Output for rectified right ImgFrame
property
right
Input for right ImgFrame of left-right pair
property
class
depthai.node.NeuralNetwork(depthai.DeviceNode)
class
method
method
getNNArchive(self) -> depthai.NNArchive|None: depthai.NNArchive|NoneGet the archive owned by this Node. Returns: constant reference to this Nodes archive
method
getNumInferenceThreads(self) -> int: intHow many inference threads will be used to run the network Returns: Number of threads, 0, 1 or 2. Zero means AUTO
method
setBackend(self, setBackend: str)Specifies backend to use Parameter ``backend``: String specifying backend to use
method
setBackendProperties(self, setBackendProperties: dict
[
str
,
str
])Set backend properties Parameter ``backendProperties``: backend properties map
method
method
setBlobPath(self, path: os.PathLike)Load network blob into assets and use once pipeline is started. Throws: Error if file doesn't exist or isn't a valid network blob. Parameter ``path``: Path to network blob
method
setFromModelZoo(self, description: depthai.NNModelDescription, useCached: bool)Download model from zoo and set it for this Node Parameter ``description:``: Model description to download Parameter ``useCached:``: Use cached model if available
method
setModelFromDeviceZoo(self, model: depthai.DeviceModelZoo)Set model from Device Model Zoo Parameter ``model``: DeviceModelZoo model enum @note Only applicable for RVC4 devices with OS 1.20.5 or higher
method
setModelPath(self, modelPath: os.PathLike)Load network xml and bin files into assets. Parameter ``modelPath``: Path to the neural network model file.
method
method
setNumInferenceThreads(self, numThreads: int)How many threads should the node use to run the network. Parameter ``numThreads``: Number of threads to dedicate to this node
method
setNumNCEPerInferenceThread(self, numNCEPerThread: int)How many Neural Compute Engines should a single thread use for inference Parameter ``numNCEPerThread``: Number of NCE per thread
method
setNumPoolFrames(self, numFrames: int)Specifies how many frames will be available in the pool Parameter ``numFrames``: How many frames will pool have
method
setNumShavesPerInferenceThread(self, numShavesPerInferenceThread: int)How many Shaves should a single thread use for inference Parameter ``numShavesPerThread``: Number of shaves per thread
property
input
Input message with data to be inferred upon
property
inputs
Inputs mapped to network inputs. Useful for inferring from separate data sources. Default input is non-blocking with queue size 1 and waits for messages
property
out
Outputs NNData message that carries inference results
property
passthrough
Passthrough message on which the inference was performed. Suitable for when input queue is set to non-blocking behavior.
property
passthroughs
Passthroughs which correspond to specified input
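A setup sketch for NeuralNetwork; the blob path is a placeholder, not a real file, and the thread/pool values are illustrative:

    import depthai as dai

    pipeline = dai.Pipeline()
    nn = pipeline.create(dai.node.NeuralNetwork)
    nn.setBlobPath("model.blob")   # placeholder; throws if the file doesn't exist
    nn.setNumInferenceThreads(2)   # 0 means AUTO
    nn.setNumPoolFrames(4)
    # An ImgFrame source would be linked to nn.input here; nn.out then
    # carries NNData inference results and nn.passthrough the input frame.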
class
depthai.node.NeuralNetwork.Model
method
class
depthai.node.ObjectTracker(depthai.DeviceNode)
method
setDetectionLabelsToTrack(self, labels: list[int])Specify detection labels to track. Parameter ``labels``: Detection labels to track. Default every label is tracked from image detection network output.
method
setMaxObjectsToTrack(self, maxObjectsToTrack: int)Specify maximum number of objects to track. Parameter ``maxObjectsToTrack``: Maximum number of objects to track. Maximum 60 in case of SHORT_TERM_KCF, otherwise 1000.
method
setOcclusionRatioThreshold(self, threshold: float)Set the occlusion ratio threshold. Used to filter out overlapping tracklets. Parameter ``threshold``: Occlusion ratio threshold. Default 0.3.
method
setRunOnHost(self, runOnHost: bool)Specify whether to run on host or device. By default, the node will run on device.
method
setSpatialAssociation(self, enabled: bool)Enable or disable spatially-aware association. If disabled, only 2D association is used. Parameter ``enabled``: `true` enables spatially-aware association, `false` uses 2D-only association. Default is false.
method
setSpatialAssociationWeight(self, weight: float)Set spatial association weight in [0,1]. Parameter ``weight``: Spatial association weight in [0,1] used to blend 2D and spatial association scores (0 = 2D-only scoring, 1 = spatial-only scoring). This weight affects candidate scoring only; final acceptance still requires passing the 2D IoU threshold gate. Default is 0.5.
method
setSpatialDepthAwareScale(self, scale: float)Set depth-aware gating scale used for spatial association. Increases gating threshold with increased depth. Parameter ``scale``: Depth-aware gating scale factor. Default is 0.35
method
setSpatialDistanceThreshold(self, thresholdMeters: float)Set base 3D gating threshold in meters for spatial association. Parameter ``thresholdMeters``: Base spatial gating distance in meters. Default is 1.5m.
method
setTrackerIdAssignmentPolicy(self, type: depthai.TrackerIdAssignmentPolicy)Specify tracker ID assignment policy. Parameter ``type``: Tracker ID assignment policy.
method
setTrackerThreshold(self, threshold: float)Specify tracker threshold. Parameter ``threshold``: Above this threshold the detected objects will be tracked. Default 0, all image detections are tracked.
method
setTrackerType(self, type: depthai.TrackerType)Specify tracker type algorithm. Parameter ``type``: Tracker type.
method
setTrackingPerClass(self, trackingPerClass: bool)Whether tracker should take into consideration class label for tracking.
method
setTrackletBirthThreshold(self, trackletBirthThreshold: int)Set the tracklet birth threshold. Minimum consecutive tracked frames required to consider a tracklet as a new (TRACKED) instance. Parameter ``trackletBirthThreshold``: Tracklet birth threshold. Default 3.
method
setTrackletMaxLifespan(self, trackletMaxLifespan: int)Set the tracklet lifespan in number of frames. Number of frames after which a LOST tracklet is removed. Parameter ``trackletMaxLifespan``: Tracklet lifespan in number of frames. Default 120.
property
inputConfig
Input ObjectTrackerConfig message with ability to modify parameters at runtime. Default queue is non-blocking with size 4.
property
inputDetectionFrame
Input ImgFrame message on which object detection was performed. Default queue is non-blocking with size 4.
property
inputDetections
Input message with image detection from neural network. Default queue is non-blocking with size 4.
property
inputTrackerFrame
Input ImgFrame message on which tracking will be performed. RGBp, BGRp, NV12, YUV420p types are supported. Default queue is non-blocking with size 4.
property
out
Outputs Tracklets message that carries object tracking results.
property
passthroughDetectionFrame
Passthrough ImgFrame message on which object detection was performed. Suitable for when input queue is set to non-blocking behavior.
property
passthroughDetections
Passthrough image detections message from neural network output. Suitable for when input queue is set to non-blocking behavior.
property
passthroughTrackerFrame
Passthrough ImgFrame message on which tracking was performed. Suitable for when input queue is set to non-blocking behavior.
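A configuration sketch for ObjectTracker; the label ID, the limits, and the TrackerType/TrackerIdAssignmentPolicy member names are assumptions from the released enums:

    import depthai as dai

    pipeline = dai.Pipeline()
    tracker = pipeline.create(dai.node.ObjectTracker)
    tracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)
    tracker.setDetectionLabelsToTrack([15])  # track a single class (illustrative ID)
    tracker.setMaxObjectsToTrack(20)
    tracker.setTrackerIdAssignmentPolicy(dai.TrackerIdAssignmentPolicy.SMALLEST_ID)
    # Blend 2D IoU and spatial scores equally; the 2D IoU gate still applies.
    tracker.setSpatialAssociation(True)
    tracker.setSpatialAssociationWeight(0.5)
    # A detection network would be linked to inputDetections,
    # inputDetectionFrame and inputTrackerFrame here; tracker.out emits Tracklets.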
class
depthai.node.PointCloud(depthai.DeviceNode)
method
setNumFramesPool(self, numFramesPool: int)Specify number of frames in pool. Parameter ``numFramesPool``: How many frames should the pool have
method
setRunOnHost(self, runOnHost: bool)Specify whether to run on host or device. By default, the node will run on host.
method
method
useCPU(self)Use single-threaded CPU for processing
method
useCPUMT(self, numThreads: int = 2)Use multi-threaded CPU for processing
method
useGPU(self, device: int = 0)Use GPU for point cloud computation Parameter ``device``: GPU device index (default 0)
property
initialConfig
Initial config to use when computing the point cloud.
property
property
inputConfig
Input PointCloudConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
property
outputPointCloud
Outputs PointCloudData message
property
passthroughDepth
Passthrough depth from which the point cloud was calculated. Suitable for when input queue is set to non-blocking behavior.
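A sketch wiring a depth source into PointCloud, assuming a connected device. The inputDepth name is taken from the released API (the property name is elided above), and GPU processing depends on how the library was built:

    import depthai as dai

    pipeline = dai.Pipeline()
    stereo = pipeline.create(dai.node.StereoDepth)
    pcl = pipeline.create(dai.node.PointCloud)
    pcl.setNumFramesPool(8)
    pcl.useCPUMT(4)  # multi-threaded CPU; pcl.useGPU(0) if built with GPU support
    stereo.depth.link(pcl.inputDepth)
    # pcl.outputPointCloud emits PointCloudData messages.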
class
depthai.node.RGBD(depthai.node.ThreadedHostNode)
method
method
printDevices(self)Print available GPU devices
method
method
useCPU(self)Use single-threaded CPU for processing
method
useCPUMT(self, numThreads: int = 2)Use multi-threaded CPU for processing Parameter ``numThreads``: Number of threads to use
method
useGPU(self, device: int = 0)Use GPU for processing (needs to be compiled with Kompute support) Parameter ``device``: GPU device index
property
property
property
pcl
Output point cloud.
property
rgbd
Output RGBD frames.
class
depthai.node.RTABMapSLAM(depthai.node.ThreadedHostNode)
method
method
method
setAlphaScaling(self, alpha: float)Set the alpha scaling factor for the camera model.
method
setDatabasePath(self, path: str)Set RTABMap database path. "/tmp/rtabmap.tmp.db" by default.
method
setFreq(self, f: float)Set the frequency at which the node processes data. 1Hz by default.
method
setLoadDatabaseOnStart(self, load: bool)Whether to load the database on start. False by default.
method
method
setParams(self, params: dict[str, str])Set RTABMap parameters. For the list of all parameters visit https://github.com/introlab/rtabmap/blob/master/corelib/include/rtabmap/core/Parameters.h
method
setPublishGrid(self, publish: bool)Whether to publish the occupancy grid map. True by default.
method
setPublishGroundCloud(self, publish: bool)Whether to publish the ground point cloud. True by default.
method
setPublishObstacleCloud(self, publish: bool)Whether to publish the obstacle point cloud. True by default.
method
setSaveDatabaseOnClose(self, save: bool)Whether to save the database on close. False by default.
method
setSaveDatabasePeriod(self, period: float)Set the interval at which the database is saved. 30.0s by default.
method
setSaveDatabasePeriodically(self, save: bool)Whether to save the database periodically. False by default.
method
setUseFeatures(self, useFeatures: bool)Whether to use input features for SLAM. False by default.
method
triggerNewMap(self)Trigger a new map.
property
property
features
Input tracked features on which SLAM is performed (optional).
property
groundPCL
Output ground point cloud.
property
obstaclePCL
Output obstacle point cloud.
property
occupancyGridMap
Output occupancy grid map.
property
odom
Input odometry pose.
property
odomCorrection
Output odometry correction (map to odom).
property
passthroughDepth
Output passthrough depth image.
property
passthroughFeatures
Output passthrough features.
property
passthroughOdom
Output passthrough odometry pose.
property
passthroughRect
Output passthrough rectified image.
property
property
transform
Output transform.
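A configuration sketch for RTABMapSLAM, assuming depthai was built with RTABMap support; the parameter key follows the upstream Parameters.h linked above and is illustrative:

    import depthai as dai

    pipeline = dai.Pipeline()
    slam = pipeline.create(dai.node.RTABMapSLAM)
    slam.setDatabasePath("/tmp/rtabmap.tmp.db")  # the documented default
    slam.setFreq(1.0)                            # process input at 1 Hz
    slam.setSaveDatabasePeriodically(True)
    slam.setSaveDatabasePeriod(30.0)             # save every 30 s
    slam.setParams({"Rtabmap/DetectionRate": "1"})  # illustrative RTABMap key
    # Rectified frames, depth and odometry would be linked to the inputs
    # above; occupancyGridMap and transform feed downstream consumers.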
class
depthai.node.RTABMapVIO(depthai.node.ThreadedHostNode)
method
reset(self, transform: depthai.TransformData)Reset Odometry.
method
method
setParams(self, params: dict[str, str])Set RTABMap parameters.
method
setUseFeatures(self, useFeatures: bool)Whether to use input features or calculate them internally.
property
property
features
Input tracked features on which VIO is performed (optional).
property
imu
Input IMU data.
property
passthroughDepth
Passthrough depth frame.
property
passthroughFeatures
Passthrough features.
property
passthroughRect
Passthrough rectified frame.
property
property
transform
Output transform.
class
depthai.node.RecordMetadataOnly(depthai.node.ThreadedHostNode)
method
method
method
method
property
input
Input IMU messages to be recorded (will support other types in the future). Default queue is blocking with size 8
class
depthai.node.RecordVideo(depthai.node.ThreadedHostNode)
method
method
method
method
method
method
method
property
input
Input for ImgFrame or EncodedFrame messages to be recorded. Default queue is blocking with size 15
class
depthai.node.Rectification(depthai.DeviceNode)
method
enableRectification(self, enable: bool) -> Rectification: RectificationEnable or disable rectification (useful for minimal changes during debugging)
method
method
setRunOnHost(self, runOnHost: bool)Specify whether to run on host or device. By default, the node will run on device.
property
input1
Input images to be rectified
property
property
output1
Send outputs
property
property
passthrough1
Passthrough for input messages (so the node can be placed between other nodes)
property
class
depthai.node.ReplayMetadataOnly(depthai.node.ThreadedHostNode)
method
method
method
method
method
method
property
out
Output for any type of messages to be transferred over XLink stream. Default queue is blocking with size 8
class
depthai.node.ReplayVideo(depthai.node.ThreadedHostNode)
method
method
method
method
method
method
method
method
method
method
method
method
property
out
Output for any type of messages to be transferred over XLink stream. Default queue is blocking with size 8
class
depthai.node.SPIIn(depthai.DeviceNode)
method
getBusId(self) -> int: intGet bus id
method
getMaxDataSize(self) -> int: intGet maximum messages size in bytes
method
getNumFrames(self) -> int: intGet number of frames in pool
method
getStreamName(self) -> str: strGet stream name
method
setBusId(self, id: int)Specifies SPI Bus number to use Parameter ``id``: SPI Bus id
method
setMaxDataSize(self, maxDataSize: int)Set maximum message size it can receive Parameter ``maxDataSize``: Maximum size in bytes
method
setNumFrames(self, numFrames: int)Set number of frames in pool for sending messages forward Parameter ``numFrames``: Maximum number of frames in pool
method
setStreamName(self, name: str)Specifies stream name over which the node will receive data Parameter ``name``: Stream name
property
out
Outputs message of same type as sent from host.
class
depthai.node.SPIOut(depthai.DeviceNode)
method
setBusId(self, id: int)Specifies SPI Bus number to use Parameter ``id``: SPI Bus id
method
setStreamName(self, name: str)Specifies stream name over which the node will send data Parameter ``name``: Stream name
property
input
Input for any type of messages to be transferred over SPI stream. Default queue is blocking with size 8
class
depthai.node.Script(depthai.DeviceNode)
method
getProcessor(self) -> depthai.ProcessorType: depthai.ProcessorTypeGet on which processor the script should run Returns: Processor type - Leon CSS or Leon MSS
method
getScriptName(self) -> str: strGet the script name in utf-8. When name set with setScript() or setScriptPath(), returns that name. When script loaded with setScriptPath() with name not provided, returns the utf-8 string of that path. Otherwise, returns "<script>" Returns: std::string of script name in utf-8
method
setProcessor(self, arg0: depthai.ProcessorType)Set on which processor the script should run Parameter ``type``: Processor type - Leon CSS or Leon MSS
method
method
property
property
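A minimal Script sketch; the inline body runs on the device and is illustrative. textwrap.dedent keeps the embedded script flush-left regardless of host-side indentation:

    import textwrap
    import depthai as dai

    pipeline = dai.Pipeline()
    script = pipeline.create(dai.node.Script)
    script.setProcessor(dai.ProcessorType.LEON_CSS)  # or LEON_MSS
    script.setScript(textwrap.dedent("""
        import time
        while True:
            node.warn('heartbeat')  # shows up in the device log
            time.sleep(1)
    """))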
class
depthai.node.SegmentationParser(depthai.DeviceNode)
method
method
getBackgroundClass(self) -> bool: boolGets whether the first class (index 0) is considered the background class.
method
getLabels(self) -> list[str]: list[str]Returns the class labels associated with the segmentation mask.
method
runOnHost(self) -> bool: boolCheck if the node is set to run on host
method
setBackgroundClass(self, backgroundClass: bool)Sets whether the first class (index 0) is considered the background class. If true, the pixels classified as index 0 will be treated as background. Parameter ``backgroundClass``: Boolean indicating if the first class is the background class @note Only applicable if the number of classes is greater than 1 and the output classes are not in a single layer (e.g. classesInOneLayer = false).
method
setLabels(self, labels: list[str])Sets the class labels associated with the segmentation mask. The label at index $i$ in the `labels` vector corresponds to the value $i$ in the segmentation mask data array. Parameter ``labels``: Vector of class labels
method
setRunOnHost(self, runOnHost: bool)Specify whether to run on host or device. By default, the node will run on device.
property
initialConfig
Initial config to use when parsing segmentation masks.
property
input
Input NN results with segmentation data to parser
property
inputConfig
Input SegmentationParserConfig message with ability to modify parameters in runtime.
property
out
Outputs segmentation mask
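A parser setup sketch; the label list is illustrative:

    import depthai as dai

    pipeline = dai.Pipeline()
    parser = pipeline.create(dai.node.SegmentationParser)
    # Mask value 0 maps to 'background', 1 to 'person', 2 to 'vehicle'.
    parser.setLabels(["background", "person", "vehicle"])
    parser.setBackgroundClass(True)  # treat index 0 as background
    # A NeuralNetwork's output would be linked to parser.input here;
    # parser.out emits the segmentation mask.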
class
depthai.node.SpatialDetectionNetwork(depthai.DeviceNode)
class
method
method
getClasses(self) -> list[str]|None: list[str]|NoneGet classes labels
method
getConfidenceThreshold(self) -> float: floatRetrieves threshold at which to filter the rest of the detections. Returns: Detection confidence
method
getNumInferenceThreads(self) -> int: intHow many inference threads will be used to run the network Returns: Number of threads, 0, 1 or 2. Zero means AUTO
method
setBackend(self, setBackend: str)Specifies backend to use Parameter ``backend``: String specifying backend to use
method
setBackendProperties(self, setBackendProperties: dict[str, str])Set backend properties Parameter ``backendProperties``: backend properties map
method
method
setBlobPath(self, path: os.PathLike)Load network blob into assets and use once pipeline is started. Throws: Error if file doesn't exist or isn't a valid network blob. Parameter ``path``: Path to network blob
method
setBoundingBoxScaleFactor(self, scaleFactor: float)Specifies scale factor for detected bounding boxes. Parameter ``scaleFactor``: Scale factor must be in the interval (0,1].
method
setConfidenceThreshold(self, thresh: float)Specifies confidence threshold at which to filter the rest of the detections. Parameter ``thresh``: Detection confidence must be greater than specified threshold to be added to the list
method
setDepthLowerThreshold(self, lowerThreshold: int)Specifies lower threshold in depth units (millimeter by default) for depth values which will be used to calculate spatial data Parameter ``lowerThreshold``: LowerThreshold must be in the interval [0,upperThreshold] and less than upperThreshold.
method
setDepthUpperThreshold(self, upperThreshold: int)Specifies upper threshold in depth units (millimeter by default) for depth values which will be used to calculate spatial data Parameter ``upperThreshold``: UpperThreshold must be in the interval (lowerThreshold,65535].
method
setFromModelZoo(self, description: depthai.NNModelDescription, useCached: bool)Download model from zoo and set it for this Node Parameter ``description``: Model description to download Parameter ``useCached``: Use cached model if available
method
setModelPath(self, modelPath: os.PathLike)Load network file into assets. Parameter ``modelPath``: Path to the model file.
method
method
setNumInferenceThreads(self, numThreads: int)How many threads should the node use to run the network. Parameter ``numThreads``: Number of threads to dedicate to this node
method
setNumNCEPerInferenceThread(self, numNCEPerThread: int)How many Neural Compute Engines should a single thread use for inference Parameter ``numNCEPerThread``: Number of NCE per thread
method
setNumPoolFrames(self, numFrames: int)Specifies how many frames will be available in the pool Parameter ``numFrames``: How many frames will pool have
method
setNumShavesPerInferenceThread(self, numShavesPerInferenceThread: int)How many Shaves should a single thread use for inference Parameter ``numShavesPerThread``: Number of shaves per thread
method
setSpatialCalculationAlgorithm(self, calculationAlgorithm: depthai.SpatialLocationCalculatorAlgorithm)Specifies spatial location calculator algorithm: Average/Min/Max Parameter ``calculationAlgorithm``: Calculation algorithm.
property
property
input
Input message with data to be inferred upon
property
inputDepth
Input message with depth data used to retrieve spatial information about detected object. Default queue is non-blocking with size 4
property
property
out
Outputs ImgDetections message that carries parsed detection results.
property
outNetwork
Outputs unparsed inference results.
property
passthrough
Passthrough message on which the inference was performed. Suitable for when input queue is set to non-blocking behavior.
property
passthroughDepth
Passthrough message for depth frame on which the spatial location calculation was performed. Suitable for when input queue is set to non-blocking behavior.
property
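A sketch of the spatial-specific setters, with illustrative thresholds in millimeters (the default depth unit); the algorithm member name is assumed from the released enum:

    import depthai as dai

    pipeline = dai.Pipeline()
    sdn = pipeline.create(dai.node.SpatialDetectionNetwork)
    sdn.setConfidenceThreshold(0.5)
    # Average depth inside a box shrunk to half its size, between 0.4 m and 5 m.
    sdn.setBoundingBoxScaleFactor(0.5)
    sdn.setDepthLowerThreshold(400)
    sdn.setDepthUpperThreshold(5000)
    sdn.setSpatialCalculationAlgorithm(dai.SpatialLocationCalculatorAlgorithm.AVERAGE)
    # A model would be loaded (e.g. setFromModelZoo) and a depth stream
    # linked to sdn.inputDepth here; sdn.out carries the detections.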
class
depthai.node.SpatialDetectionNetwork.Model
method
class
depthai.node.SpatialLocationCalculator(depthai.DeviceNode)
method
runOnHost(self) -> bool: boolCheck if the node is set to run on host
method
setRunOnHost(self, runOnHost: bool)Specify whether to run on host or device. By default, the node will run on device.
property
initialConfig
Initial config to use when calculating spatial location data.
property
inputConfig
Input SpatialLocationCalculatorConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
inputDepth
Input message with depth data used to retrieve spatial information about detected object. Default queue is non-blocking with size 4.
property
inputDetections
Input messages on which spatial location will be calculated. Possible datatypes are ImgDetections or Keypoints.
property
out
Outputs SpatialLocationCalculatorData message that carries spatial locations for each additional ROI that is specified in the config.
property
outputDetections
Outputs SpatialImgDetections message that carries spatial locations along with original input data.
property
passthroughDepth
Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
class
depthai.node.StereoDepth(depthai.DeviceNode)
class
PresetMode
Preset modes for stereo depth. Members: FAST_ACCURACY FAST_DENSITY DEFAULT FACE HIGH_DETAIL ROBOTICS DENSITY ACCURACY
method
method
method
enableDistortionCorrection(self, arg0: bool)Equivalent to useHomographyRectification(!enableDistortionCorrection)
method
loadMeshData()Specify mesh calibration data for 'left' and 'right' inputs, as vectors of bytes. Overrides useHomographyRectification behavior. See `loadMeshFiles` for the expected data format
method
loadMeshFiles(self, pathLeft: os.PathLike, pathRight: os.PathLike)Specify local filesystem paths to the mesh calibration files for 'left' and 'right' inputs. When a mesh calibration is set, it overrides the camera intrinsics/extrinsics matrices. Overrides useHomographyRectification behavior. Mesh format: a sequence of (y,x) points as 'float' with coordinates from the input image to be mapped in the output. The mesh can be subsampled, configured by `setMeshStep`. With a 1280x800 resolution and the default (16,16) step, the required mesh size is: width: 1280 / 16 + 1 = 81 height: 800 / 16 + 1 = 51
method
setAlphaScaling(self, arg0: float)Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). On some high distortion lenses, and/or due to rectification (image rotated), invalid areas may appear even with alpha=0; in these cases alpha < 0.0 helps remove invalid areas. See getOptimalNewCameraMatrix from opencv for more details.
method
setBaseline(self, arg0: float)Override baseline from calibration. Used only in disparity to depth conversion. Units are centimeters.
method
setDefaultProfilePreset(self, arg0: StereoDepth.PresetMode)Sets a default preset based on specified option. Parameter ``mode``: Stereo depth preset mode
method
method
setDepthAlignmentUseSpecTranslation(self, arg0: bool)Use baseline information for depth alignment from specs (design data) or from calibration. Default: true
method
setDisparityToDepthUseSpecTranslation(self, arg0: bool)Use baseline information for disparity to depth conversion from specs (design data) or from calibration. Default: true
method
setExtendedDisparity(self, enable: bool)Disparity range increased from 0-95 to 0-190, combined from full resolution and downscaled images. Suitable for short range objects. Currently incompatible with sub-pixel disparity
method
setFocalLength(self, arg0: float)Override focal length from calibration. Used only in disparity to depth conversion. Units are pixels.
method
method
setLeftRightCheck(self, enable: bool)Computes disparities in both L-R and R-L directions and combines them, for better occlusion handling, discarding invalid disparity values
method
setMeshStep(self, width: int, height: int)Set the distance between mesh points. Default: (16, 16)
method
setNumFramesPool(self, arg0: int)Specify number of frames in pool. Parameter ``numFramesPool``: How many frames should the pool have
method
setOutputKeepAspectRatio(self, keep: bool)Specifies whether the frames resized by `setOutputSize` should preserve aspect ratio, with potential cropping when enabled. Default `true`
method
setOutputSize(self, width: int, height: int)Specify disparity/depth output resolution size, implemented by scaling. Currently only applicable when aligning to RGB camera
method
setPostProcessingHardwareResources(self, arg0: int, arg1: int)Specify allocated hardware resources for stereo depth. Suitable only to increase post processing runtime. Parameter ``numShaves``: Number of shaves. Parameter ``numMemorySlices``: Number of memory slices.
method
setRectification(self, enable: bool)Rectify input images or not.
method
setRectificationUseSpecTranslation(self, arg0: bool)Obtain rectification matrices using spec translation (design data) or from calibration in calculations. Should be used only for debugging. Default: false
method
setRectifyEdgeFillColor(self, color: int)Fill color for missing data at frame edges Parameter ``color``: Grayscale 0..255, or -1 to replicate pixels
method
setRuntimeModeSwitch(self, arg0: bool)Enable runtime stereo mode switch, e.g. from standard to LR-check. Note: when enabled, resources are allocated for the worst case, to enable switching to any mode.
method
setSubpixel(self, enable: bool)Computes disparity with sub-pixel interpolation (3 fractional bits by default). Suitable for long range. Currently incompatible with extended disparity
method
setSubpixelFractionalBits(self, subpixelFractionalBits: int)Number of fractional bits for subpixel mode. Default value: 3. Valid values: 3,4,5. Defines the number of fractional disparities: 2^x. Median filter postprocessing is supported only for 3 fractional bits.
method
useHomographyRectification(self, arg0: bool)Use 3x3 homography matrix for stereo rectification instead of sparse mesh generated on device. Default behaviour is AUTO: for lenses with FOV over 85 degrees sparse mesh is used, otherwise 3x3 homography. If custom mesh data is provided through loadMeshData or loadMeshFiles this option is ignored. Parameter ``useHomographyRectification``: true: 3x3 homography matrix generated from calibration data is used for stereo rectification, can't correct lens distortion. false: sparse mesh is generated on-device from calibration data with mesh step specified with setMeshStep (Default: (16, 16)), can correct lens distortion. The implementation for generating the mesh is the same as OpenCV's initUndistortRectifyMap function. Only the first 8 distortion coefficients are used from calibration data.
property
confidenceMap
Outputs ImgFrame message that carries RAW8 confidence map. Lower values mean lower confidence of the calculated disparity value. RGB alignment, left-right check or any postprocessing (e.g., median filter) is not performed on confidence map.
property
debugDispCostDump
Outputs ImgFrame message that carries cost dump of disparity map. Useful for debugging/fine tuning.
property
debugDispLrCheckIt1
Outputs ImgFrame message that carries left-right check first iteration (before combining with second iteration) disparity map. Useful for debugging/fine tuning.
property
debugDispLrCheckIt2
Outputs ImgFrame message that carries left-right check second iteration (before combining with first iteration) disparity map. Useful for debugging/fine tuning.
property
debugExtDispLrCheckIt1
Outputs ImgFrame message that carries extended left-right check first iteration (downscaled frame, before combining with second iteration) disparity map. Useful for debugging/fine tuning.
property
debugExtDispLrCheckIt2
Outputs ImgFrame message that carries extended left-right check second iteration (downscaled frame, before combining with first iteration) disparity map. Useful for debugging/fine tuning.
property
depth
Outputs ImgFrame message that carries RAW16 encoded (0..65535) depth data in depth units (millimeter by default). Non-determined / invalid depth values are set to 0
property
disparity
Outputs ImgFrame message that carries RAW8 / RAW16 encoded disparity data: RAW8 encoded (0..95) for standard mode; RAW8 encoded (0..190) for extended disparity mode; RAW16 encoded for subpixel disparity mode: - 0..760 for 3 fractional bits (by default) - 0..1520 for 4 fractional bits - 0..3040 for 5 fractional bits
property
initialConfig
Initial config to use for StereoDepth.
property
inputAlignTo
Input align to message. Default queue is non-blocking with size 1.
property
inputConfig
Input StereoDepthConfig message with ability to modify parameters in runtime.
property
left
Input for left ImgFrame of left-right pair
property
outConfig
Outputs StereoDepthConfig message that contains current stereo configuration.
property
rectifiedLeft
Outputs ImgFrame message that carries RAW8 encoded (grayscale) rectified frame data.
property
rectifiedRight
Outputs ImgFrame message that carries RAW8 encoded (grayscale) rectified frame data.
property
right
Input for right ImgFrame of left-right pair
property
syncedLeft
Passthrough ImgFrame message from 'left' Input.
property
syncedRight
Passthrough ImgFrame message from 'right' Input.
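A typical StereoDepth configuration using the setters above; the preset and option values are illustrative:

    import depthai as dai

    pipeline = dai.Pipeline()
    stereo = pipeline.create(dai.node.StereoDepth)
    stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.DEFAULT)
    stereo.setLeftRightCheck(True)       # better occlusion handling
    stereo.setSubpixel(True)             # long range; incompatible with extended disparity
    stereo.setSubpixelFractionalBits(3)  # disparity output range 0..760
    stereo.setRectifyEdgeFillColor(0)    # black fill for missing edge data
    # Mono sources link to stereo.left / stereo.right; stereo.depth outputs
    # RAW16 depth in millimeters by default.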
class
depthai.node.StereoDepth.PresetMode
variable
variable
variable
variable
variable
variable
variable
variable
variable
method
method
method
method
method
method
method
method
method
method
property
property
class
depthai.node.Sync(depthai.DeviceNode)
method
getProcessor(self) -> depthai.ProcessorType: depthai.ProcessorTypeGet on which processor the node should run Returns: Processor type - Leon CSS or Leon MSS
method
getSyncAttempts(self) -> int: intGets the number of sync attempts
method
getSyncThreshold(self) -> datetime.timedelta: datetime.timedeltaGets the maximal interval between messages in the group in milliseconds
method
runOnHost(self) -> bool: boolCheck if the node is set to run on host
method
setProcessor(self, arg0: depthai.ProcessorType)Specify on which processor the node should run. RVC2 only. Parameter ``type``: Processor type - Leon CSS or Leon MSS
method
setRunOnHost(self, runOnHost: bool)Specify whether to run on host or device. By default, the node will run on device.
method
setSyncAttempts(self, maxDataSize: int)Set the number of attempts to get the specified max interval between messages in the group Parameter ``syncAttempts``: Number of attempts to get the specified max interval between messages in the group: - if syncAttempts = 0 then the node sends a message as soon as the group is filled - if syncAttempts > 0 then the node will make syncAttempts attempts to synchronize before sending out a message - if syncAttempts = -1 (default) then the node will only send a message if successfully synchronized
method
setSyncThreshold(self, syncThreshold: datetime.timedelta)Set the maximal interval between messages in the group Parameter ``syncThreshold``: Maximal interval between messages in the group
property
inputs
A map of inputs
property
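A Sync sketch; keys in the inputs map are user-chosen and the linked sources are illustrative:

    import datetime
    import depthai as dai

    pipeline = dai.Pipeline()
    sync = pipeline.create(dai.node.Sync)
    # All messages in an emitted group lie within 10 ms of each other.
    sync.setSyncThreshold(datetime.timedelta(milliseconds=10))
    sync.setSyncAttempts(-1)  # default: only emit successfully synced groups
    # Sources are linked into the map of inputs, e.g.:
    # color_out.link(sync.inputs["rgb"])
    # stereo.depth.link(sync.inputs["depth"])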
class
depthai.node.SystemLogger(depthai.DeviceNode)
method
getRate(self) -> float: floatGets logging rate, at which messages will be sent out
method
setRate(self, hz: float)Specify logging rate, at which messages will be sent out Parameter ``hz``: Sending rate in hertz (messages per second)
property
out
Outputs a SystemInformation message (series 2 devices) or a SystemInformationRVC4 message (series 4 devices) that carries various system information like memory and CPU usage, temperatures, ...
class
depthai.node.Thermal(depthai.DeviceNode)
method
build(self, boardSocket: depthai.CameraBoardSocket = ..., fps: float = 25.0) -> Thermal: ThermalBuild with a specific board socket and fps.
method
getBoardSocket(self) -> depthai.CameraBoardSocket: depthai.CameraBoardSocketRetrieves which board socket to use Returns: Board socket to use
property
color
Outputs YUV422i grayscale thermal image.
property
initialConfig
Initial config to use for thermal sensor.
property
inputConfig
Input ThermalConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
temperature
Outputs FP16 (degC) thermal image.
class
depthai.node.ThreadedHostNode(depthai.ThreadedNode)
method
method
method
method
method
method
method
class
depthai.node.ToF(depthai.DeviceNodeGroup)
static method
method
method
method
property
amplitude
Amplitude output
property
depth
Filtered depth output
property
imageFiltersInputConfig
Input config for image filters
property
imageFiltersNode
Image filters node
property
intensity
Intensity output
property
phase
Phase output
property
rawDepth
Raw depth output from ToF sensor
property
tofBaseInputConfig
Input config for ToF base node
property
tofBaseNode
ToF base node
class
depthai.node.ToFBase(depthai.DeviceNode)
method
build(self, boardSocket: depthai.CameraBoardSocket = ..., presetMode: depthai.ImageFiltersPresetMode = ..., fps: float|None = None) -> ToFBase: ToFBaseBuild with a specific board socket
method
getBoardSocket(self) -> depthai.CameraBoardSocket: depthai.CameraBoardSocketRetrieves which board socket to use Returns: Board socket to use
property
property
property
initialConfig
Initial config to use for the ToF sensor.
property
inputConfig
Input ToFConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
property
class
depthai.node.ToFDepthConfidenceFilter(depthai.DeviceNode)
method
method
runOnHost(self) -> bool: boolCheck if the node is set to run on host
method
setRunOnHost(self, runOnHost: bool)Specify whether to run on host or device. By default, the node will run on device.
property
amplitude
Amplitude frame image, expected ImgFrame type is RAW8 or RAW16.
property
confidence
RAW16 encoded confidence frame
property
depth
Depth frame image, expected ImgFrame type is RAW8 or RAW16.
property
filteredDepth
RAW16 encoded filtered depth frame
property
initialConfig
Initial config for ToF depth confidence filter.
property
inputConfig
Config message for runtime filter configuration
class
depthai.node.UVC(depthai.DeviceNode)
method
setGpiosOnInit(self, list: dict[int, int])Set GPIO list <gpio_number, value> for GPIOs to set (on/off) at init
method
setGpiosOnStreamOff(self, list: dict[int, int])Set GPIO list <gpio_number, value> for GPIOs to set when streaming is disabled
method
setGpiosOnStreamOn(self, list: dict[int, int])Set GPIO list <gpio_number, value> for GPIOs to set when streaming is enabled
property
input
Input for image frames to be streamed over UVC. Default queue is blocking with size 8
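A GPIO sketch for the UVC node; the GPIO number and levels are board-specific assumptions:

    import depthai as dai

    pipeline = dai.Pipeline()
    uvc = pipeline.create(dai.node.UVC)
    uvc.setGpiosOnInit({33: 0})       # drive GPIO 33 low at init (illustrative)
    uvc.setGpiosOnStreamOn({33: 1})   # raise it while streaming
    uvc.setGpiosOnStreamOff({33: 0})  # lower it again when streaming stops
    # An image source would be linked to uvc.input here.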
class
depthai.node.VideoEncoder(depthai.DeviceNode)
method
method
method
getBitrate(self) -> int: intGet bitrate in bps
method
getBitrateKbps(self) -> int: intGet bitrate in kbps
method
getFrameRate(self) -> float: floatGet frame rate
method
getKeyframeFrequency(self) -> int: intGet keyframe frequency
method
getLossless(self) -> bool: boolGet lossless mode. Applies only when using [M]JPEG profile.
method
method
getNumBFrames(self) -> int: intGet number of B frames
method
getNumFramesPool(self) -> int: intGet number of frames in pool Returns: Number of pool frames
method
getProfile(self) -> depthai.VideoEncoderProperties.Profile: depthai.VideoEncoderProperties.ProfileGet profile
method
getQuality(self) -> int: intGet quality
method
getRateControlMode(self) -> depthai.VideoEncoderProperties.RateControlMode: depthai.VideoEncoderProperties.RateControlModeGet rate control mode
method
setBitrate(self, bitrate: int)Set output bitrate in bps, for CBR rate control mode. 0 for auto (based on frame size and FPS)
method
setBitrateKbps(self, bitrateKbps: int)Set output bitrate in kbps, for CBR rate control mode. 0 for auto (based on frame size and FPS)
method
setDefaultProfilePreset(self, fps: float, profile: depthai.VideoEncoderProperties.Profile)Sets a default preset based on specified frame rate and profile Parameter ``fps``: Frame rate in frames per second Parameter ``profile``: Encoding profile
method
setFrameRate(self, frameRate: float)Sets expected frame rate Parameter ``frameRate``: Frame rate in frames per second
method
setKeyframeFrequency(self, freq: int)Set keyframe frequency. Every Nth frame a keyframe is inserted. Applicable only to H264 and H265 profiles Examples: - 30 FPS video, keyframe frequency: 30. Every 1s a keyframe will be inserted - 60 FPS video, keyframe frequency: 180. Every 3s a keyframe will be inserted
method
setLossless(self, arg0: bool)Set lossless mode. Applies only to [M]JPEG profile Parameter ``lossless``: True to enable lossless jpeg encoding, false otherwise
method
setMaxOutputFrameSize(self, maxFrameSize: int)Specifies maximum output encoded frame size
method
setNumBFrames(self, numBFrames: int)Set number of B frames to be inserted
method
setNumFramesPool(self, frames: int)Set number of frames in pool Parameter ``frames``: Number of pool frames
method
setProfile(self, profile: depthai.VideoEncoderProperties.Profile)Set encoding profile
method
setQuality(self, quality: int)Set quality Parameter ``quality``: Value between 0-100%. Approximates quality
method
setRateControlMode(self, mode: depthai.VideoEncoderProperties.RateControlMode)Set rate control mode
property
bitstream
Outputs ImgFrame message that carries BITSTREAM encoded (MJPEG, H264 or H265) frame data. Mutually exclusive with out.
property
input
Input for NV12 ImgFrame to be encoded
property
out
Outputs EncodedFrame message that carries encoded (MJPEG, H264 or H265) frame data. Mutually exclusive with bitstream.
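An encoder sketch illustrating the keyframe math from setKeyframeFrequency (at 30 FPS, a frequency of 90 gives a keyframe every 3 s); the Profile member name is assumed from the released enum:

    import depthai as dai

    pipeline = dai.Pipeline()
    enc = pipeline.create(dai.node.VideoEncoder)
    enc.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.H264_MAIN)
    enc.setKeyframeFrequency(90)  # 30 FPS -> keyframe every 3 s
    enc.setBitrateKbps(4000)      # CBR target; 0 = auto from frame size and FPS
    # An NV12 ImgFrame source links to enc.input; consume either enc.out
    # (EncodedFrame) or enc.bitstream (ImgFrame), never both.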
class
depthai.node.Vpp(depthai.DeviceNode)
method
property
property
property
initialConfig
Initial config of the node.
method
property
property
property
leftOut
Output ImgFrame message that carries the processed left image with virtual projection pattern applied.
property
property
rightOut
Output ImgFrame message that carries the processed right image with virtual projection pattern applied.
property
syncedInputs
"Synchronised Left Img, Right Img, Dispatiy and confidence input."
class
depthai.node.Warp(depthai.DeviceNode)
method
getHwIds(self) -> list[int]: list[int]Retrieve which hardware warp engines to use
method
getInterpolation(self) -> depthai.Interpolation: depthai.InterpolationRetrieve which interpolation method to use
method
setHwIds(self, arg0: list[int])Specify which hardware warp engines to use Parameter ``ids``: Which warp engines to use (0, 1, 2)
method
setInterpolation(self, arg0: depthai.Interpolation)Specify which interpolation method to use Parameter ``interpolation``: type of interpolation
method
setMaxOutputFrameSize(self, arg0: int)Specify maximum size of output image. Parameter ``maxFrameSize``: Maximum frame size in bytes
method
setNumFramesPool(self, arg0: int)Specify number of frames in pool. Parameter ``numFramesPool``: How many frames should the pool have
method
method
property
inputImage
Input image to be modified. Default queue is blocking with size 8
property
out
Outputs ImgFrame message that carries warped image.
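A Warp sketch; the engine IDs follow the parameter description above, while the Interpolation member name and the size cap are assumptions:

    import depthai as dai

    pipeline = dai.Pipeline()
    warp = pipeline.create(dai.node.Warp)
    warp.setHwIds([1, 2])  # pick specific hardware warp engines (0, 1, 2)
    warp.setInterpolation(dai.Interpolation.BILINEAR)
    warp.setMaxOutputFrameSize(1280 * 800 * 3)  # bytes; illustrative upper bound
    warp.setNumFramesPool(4)
    # Frames link to warp.inputImage; warp.out emits the warped ImgFrame.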