DepthAI Python API

The DepthAI Python API can be found on GitHub at luxonis/depthai-python. Below is the reference documentation for the Python API.
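As a quick orientation before the reference entries, here is a minimal sketch of how the API is typically driven. It assumes the v3-style pipeline API documented on this page (see Camera.requestFullResolutionOutput and MessageQueue below); exact names and signatures should be verified against your installed release.

    import depthai as dai

    # A Pipeline owns the nodes; in recent releases it can be used as a
    # context manager, connecting to the first available device.
    with dai.Pipeline() as pipeline:
        # Create a camera node and request a full-resolution output stream.
        cam = pipeline.create(dai.node.Camera).build()
        videoQueue = cam.requestFullResolutionOutput().createOutputQueue()

        pipeline.start()
        while pipeline.isRunning():
            frame = videoQueue.get()  # blocking read of an ImgFrame message
            # ... process the frame, e.g. frame.getCvFrame() with OpenCV ...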
package

depthai

module
modelzoo
Model Zoo
package
class
ADatatype
Abstract message
class
AprilTag
AprilTag structure.
class
AprilTagConfig
AprilTagConfig message.
class
AprilTagProperties
Specify properties for AprilTag
class
AprilTags
AprilTags message.
class
Asset
An Asset is identified by a string key and can store arbitrary binary data
class
AssetManager
AssetManager can store assets and serialize them
class
BenchmarkReport
BenchmarkReport message.
class
Buffer
Base message - buffer of binary data
class
CalibrationHandler
CalibrationHandler is an interface to read/load/write structured calibration and device data. The following fields are protected and aren't allowed to be overridden by default: boardName, boardRev, boardConf, hardwareConf, batchName, batchTime, boardOptions, productName.
class
CameraBoardSocket
Which Camera socket to use.  AUTO denotes that the decision will be made by device  Members:    AUTO    CAM_A    CAM_B    CAM_C    CAM_D    VERTICAL    CAM_E    CAM_F    CAM_G    CAM_H    RGB : **Deprecated:** Use CAM_A or address camera by name instead    LEFT : **Deprecated:** Use CAM_B or address camera by name instead    RIGHT : **Deprecated:** Use CAM_C or address camera by name instead    CENTER : **Deprecated:** Use CAM_A or address camera by name instead
class
CameraControl
CameraControl message. Specifies various camera control commands like:  - Still capture  - Auto/manual focus  - Auto/manual white balance  - Auto/manual exposure  - Anti banding  - ...  By default the camera enables 3A, with auto-focus in `CONTINUOUS_VIDEO` mode, auto-white-balance in `AUTO` mode, and auto-exposure with anti-banding for 50Hz mains frequency.
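For example, a CameraControl message can be built on the host and sent to a camera's inputControl to override the default 3A behavior (a sketch; the values are illustrative and the setter names should be checked against this reference):

    import depthai as dai

    ctrl = dai.CameraControl()
    ctrl.setManualExposure(20000, 400)  # exposure time [us], ISO sensitivity
    ctrl.setManualWhiteBalance(4000)    # color temperature [K]
    # controlQueue.send(ctrl)  # a queue created from the camera's inputControl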
class
CameraExposureOffset
Members:    START    MIDDLE    END
class
CameraFeatures
CameraFeatures structure  Characterizes detected cameras on board
class
CameraImageOrientation
Camera sensor image orientation / pixel readout. This exposes direct sensor settings. 90 or 270 degrees rotation is not available.  AUTO denotes that the decision will be made by device (e.g. on OAK-1/megaAI: ROTATE_180_DEG).  Members:    AUTO    NORMAL    HORIZONTAL_MIRROR    VERTICAL_FLIP    ROTATE_180_DEG
class
CameraInfo
CameraInfo structure
class
CameraModel
Which CameraModel to initialize the calibration with.  Members:    Perspective    Fisheye    Equirectangular    RadialDivision
class
CameraSensorConfig
Sensor config
class
CameraSensorType
Camera sensor type  Members:    COLOR    MONO    TOF    THERMAL
class
ChipTemperature
Chip temperature information.  Multiple temperature measurement points and their average
class
ChipTemperatureS3
Chip temperature information.  Multiple temperature measurement points and their average
class
Color
Color structure  r,g,b,a color values with values in range [0.0, 1.0]
class
ColorCameraProperties
Specify properties for ColorCamera such as camera ID, ...
class
Colormap
Colormap type  Members:    NONE    JET    TURBO    STEREO_JET    STEREO_TURBO
class
CpuUsage
CpuUsage structure  Average usage in percent and time span of the average (since last query)
class
DatatypeEnum
Members:    ADatatype    Buffer    ImgFrame    EncodedFrame    NNData    ImageManipConfig    CameraControl    ImgDetections    SpatialImgDetections    SystemInformation    SystemInformationS3    SpatialLocationCalculatorConfig    SpatialLocationCalculatorData    EdgeDetectorConfig    AprilTagConfig    AprilTags    Tracklets    IMUData    StereoDepthConfig    FeatureTrackerConfig    ThermalConfig    ToFConfig    TrackedFeatures    BenchmarkReport    MessageGroup    TransformData    PointCloudConfig    PointCloudData    ImageAlignConfig    ImgAnnotations    RGBDData
class
DetectionNetworkType
Members:    YOLO    MOBILENET
class
DetectionParserOptions
DetectionParserOptions  Specifies how to parse output of detection networks
class
DetectionParserProperties
Specify properties for DetectionParser
class
Device
Represents the DepthAI device with the methods to interact with it. Implements the host-side queues to connect with XLinkIn and XLinkOut nodes
class
DeviceBase
The core of depthai device for RAII, connects to device and maintains watchdog, timesync, ...
class
DeviceBootloader
Represents the DepthAI bootloader with the methods to interact with it.
class
DeviceInfo
Describes a connected device
class
EdgeDetectorConfig
EdgeDetectorConfig message. Carries sobel edge filter config.
class
EdgeDetectorProperties
Specify properties for EdgeDetector
class
EepromData
EepromData structure  Contains the Calibration and Board data stored on device
exception
class
Extrinsics
Extrinsics structure
class
FeatureTrackerConfig
FeatureTrackerConfig message. Carries config for feature tracking algorithm
class
FeatureTrackerProperties
Specify properties for FeatureTracker
class
FrameEvent
Members:    NONE    READOUT_START    READOUT_END
class
GlobalProperties
Specify properties which apply for whole pipeline
class
IMUData
IMUData message. Carries IMU packets.
class
IMUPacket
IMU output  Contains combined output for all possible modes. Only the enabled outputs are populated.
class
IMUReportAccelerometer
Accelerometer  Units are [m/s^2]
class
IMUReportGyroscope
Gyroscope  Units are [rad/s]
class
IMUReportMagneticField
Magnetic field  Units are [uTesla]
class
IMUReportRotationVectorWAcc
Rotation Vector with Accuracy  Contains quaternion components: i,j,k,real
class
IMUSensor
Available IMU sensors. More details about each sensor can be found in the datasheet:  https://www.ceva-dsp.com/wp-content/uploads/2019/10/BNO080_085-Datasheet.pdf  Members:    ACCELEROMETER_RAW : Section 2.1.1  Acceleration of the device without any postprocessing, straight from the sensor. Units are [m/s^2]    ACCELEROMETER : Section 2.1.1  Acceleration of the device including gravity. Units are [m/s^2]    LINEAR_ACCELERATION : Section 2.1.1  Acceleration of the device with gravity removed. Units are [m/s^2]    GRAVITY : Section 2.1.1  Gravity. Units are [m/s^2]    GYROSCOPE_RAW : Section 2.1.2  The angular velocity of the device without any postprocessing, straight from the sensor. Units are [rad/s]    GYROSCOPE_CALIBRATED : Section 2.1.2  The angular velocity of the device. Units are [rad/s]    GYROSCOPE_UNCALIBRATED : Section 2.1.2  Angular velocity without bias compensation. Units are [rad/s]    MAGNETOMETER_RAW : Section 2.1.3  Magnetic field measurement without any postprocessing, straight from the sensor. Units are [uTesla]    MAGNETOMETER_CALIBRATED : Section 2.1.3  The fully calibrated magnetic field measurement. Units are [uTesla]    MAGNETOMETER_UNCALIBRATED : Section 2.1.3  The magnetic field measurement without hard-iron offset applied. Units are [uTesla]    ROTATION_VECTOR : Section 2.2  The rotation vector provides an orientation output that is expressed as a quaternion referenced to magnetic north and gravity. It is produced by fusing the outputs of the accelerometer, gyroscope and magnetometer. The rotation vector is the most accurate orientation estimate available. The magnetometer provides correction in yaw to reduce drift and the gyroscope enables the most responsive performance.    GAME_ROTATION_VECTOR : Section 2.2  The game rotation vector is an orientation output that is expressed as a quaternion with no specific reference for heading, while roll and pitch are referenced against gravity. It is produced by fusing the outputs of the accelerometer and the gyroscope (i.e. no magnetometer). The game rotation vector does not use the magnetometer to correct the gyroscope's drift in yaw. This is a deliberate omission (as specified by Google) to allow gaming applications to use a smoother representation of the orientation without the jumps that an instantaneous correction from a magnetic field update would cause. Long term the output will likely drift in yaw due to the characteristics of gyroscopes, but this is seen as preferable for this output versus a corrected output.    GEOMAGNETIC_ROTATION_VECTOR : Section 2.2  The geomagnetic rotation vector is an orientation output that is expressed as a quaternion referenced to magnetic north and gravity. It is produced by fusing the outputs of the accelerometer and magnetometer. The gyroscope is specifically excluded in order to produce a rotation vector output using less power than is required to produce the rotation vector of section 2.2.4. The consequences of removing the gyroscope are a less responsive output (since the highly dynamic outputs of the gyroscope are not used) and more errors in the presence of varying magnetic fields.    ARVR_STABILIZED_ROTATION_VECTOR : Section 2.2  Estimates of the magnetic field and the roll/pitch of the device can create a potential correction in the rotation vector produced. For applications (typically augmented or virtual reality applications) where a sudden jump can be disturbing, the output is adjusted to prevent these jumps in a manner that takes account of the velocity of the sensor system.    ARVR_STABILIZED_GAME_ROTATION_VECTOR : Section 2.2  While the magnetometer is removed from the calculation of the game rotation vector, the accelerometer itself can create a potential correction in the rotation vector produced (i.e. the estimate of gravity changes). For applications (typically augmented or virtual reality applications) where a sudden jump can be disturbing, the output is adjusted to prevent these jumps in a manner that takes account of the velocity of the sensor system. This process is called AR/VR stabilization.
class
ImageAlignConfig
ImageAlignConfig configuration structure
class
ImageAlignProperties
Specify properties for ImageAlign
class
ImageManipConfig
ImageManipConfig message. Specifies image manipulation options like:  - Crop  - Resize  - Warp  - ...
class
ImgDetections
ImgDetections message. Carries normalized detection results
class
ImgResizeMode
Members:    CROP    STRETCH    LETTERBOX
class
ImgTransformation
ImgTransformation struct. Holds information on how an ImgFrame or related message was transformed from its source. Useful for remapping from one ImgFrame to another.
class
Interpolation
Interpolation type  Members:    BILINEAR    BICUBIC    NEAREST_NEIGHBOR    BYPASS    DEFAULT    DEFAULT_DISPARITY_DEPTH
class
LogLevel
Members:    TRACE    DEBUG    INFO    WARN    ERR    CRITICAL    OFF
class
MedianFilter
Median filter config  Members:    MEDIAN_OFF    KERNEL_3x3    KERNEL_5x5    KERNEL_7x7
class
MemoryInfo
MemoryInfo structure  Free, remaining and total memory stats
class
MessageDemuxProperties
MessageDemux does not have any properties to set
class
MessageGroup
MessageGroup message. Carries multiple messages in one.
class
MessageQueue
Thread safe queue to send messages between nodes
class
ModelType
Neural network model type  Members:    BLOB    SUPERBLOB    DLC    NNARCHIVE    OTHER
class
MonoCameraProperties
Specify properties for MonoCamera such as camera ID, ...
class
NNData
NNData message. Carries tensors and their metadata
class
NeuralNetworkProperties
Specify properties for NeuralNetwork such as blob path, ...
class
Node
Abstract Node
class
ObjectTrackerProperties
Specify properties for ObjectTracker
class
OpenVINO
Support for basic OpenVINO related actions like version identification of neural network blobs,...
class
Platform
Hardware platform type  Members:    RVC2    RVC3    RVC4
class
Point2f
Point2f structure  x and y coordinates that define a 2D point.
class
Point3d
Point3d structure  x,y,z coordinates that define a 3D point.
class
Point3f
Point3f structure  x,y,z coordinates that define a 3D point.
class
Point3fRGBA
Point3fRGBA structure  x,y,z coordinates and RGB color values that define a 3D point with color.
class
PointCloudConfig
PointCloudConfig message. Carries ROI (region of interest) and threshold for depth calculation
class
PointCloudData
PointCloudData message. Carries point cloud data.
class
PointCloudProperties
Specify properties for PointCloud
class
PointsAnnotationType
Members:    UNKNOWN    POINTS    LINE_LOOP    LINE_STRIP    LINE_LIST
class
ProcessorType
Members:    LEON_CSS    LEON_MSS    CPU    DSP
class
Properties
Base Properties structure
class
Quaterniond
Quaterniond structure  qx,qy,qz,qw values that define a 3D orientation.
class
RGBDData
RGBD message. Carries RGB and Depth frames.
class
RecordConfig
Configuration for recording and replaying messages
class
Rect
Rect structure  x,y coordinates together with width and height that define a rectangle. Can be either normalized [0,1] or absolute representation.
class
RotatedRect
RotatedRect structure
class
SPIInProperties
Properties for SPIIn node
class
SPIOutProperties
Specify properties for SPIOut node
class
ScriptProperties
Specify ScriptProperties options such as script uri, script name, ...
class
SerializationType
Members:    LIBNOP    JSON    JSON_MSGPACK
class
Size2f
Size2f structure  width, height values define the size of the shape/frame
class
SpatialDetectionNetworkProperties
Specify properties for SpatialDetectionNetwork
class
SpatialImgDetection
SpatialImgDetection structure  Contains image detection results together with spatial location data.
class
SpatialImgDetections
SpatialImgDetections message. Carries detection results together with spatial location data
class
SpatialLocationCalculatorAlgorithm
SpatialLocationCalculatorAlgorithm configuration modes  Contains calculation method used to obtain spatial locations.  Members:    AVERAGE    MEAN    MIN    MAX    MODE    MEDIAN
class
SpatialLocationCalculatorConfig
SpatialLocationCalculatorConfig message. Carries ROI (region of interest) and threshold for depth calculation
class
SpatialLocationCalculatorConfigData
SpatialLocation configuration data structure
class
SpatialLocationCalculatorConfigThresholds
SpatialLocation configuration thresholds structure  Contains configuration data for lower and upper threshold in depth units (millimeter by default) for ROI. Values outside of threshold range will be ignored when calculating spatial coordinates from depth map.
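A typical configuration pairs a normalized ROI with such thresholds (a sketch; the field and method names follow the entries in this listing):

    import depthai as dai

    roiData = dai.SpatialLocationCalculatorConfigData()
    roiData.roi = dai.Rect(dai.Point2f(0.4, 0.4), dai.Point2f(0.6, 0.6))  # normalized ROI
    roiData.depthThresholds.lowerThreshold = 100    # ignore depth below 100 mm
    roiData.depthThresholds.upperThreshold = 10000  # ignore depth above 10 m

    cfg = dai.SpatialLocationCalculatorConfig()
    cfg.addROI(roiData)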
class
SpatialLocationCalculatorData
SpatialLocationCalculatorData message. Carries spatial information (X,Y,Z) and their configuration parameters
class
SpatialLocationCalculatorProperties
Specify properties for SpatialLocationCalculator
class
SpatialLocations
SpatialLocations structure  Contains configuration data, average depth for the calculated ROI on depth map. Together with spatial coordinates: x,y,z relative to the center of depth map. Units are in depth units (millimeter by default).
class
StereoDepthConfig
StereoDepthConfig message.
class
StereoDepthProperties
Specify properties for StereoDepth
class
StereoPair
Describes which camera sockets can be used for stereo and their baseline.
class
StereoRectification
StereoRectification structure
class
SyncProperties
Specify properties for Sync.
class
SystemInformation
SystemInformation message. Carries memory usage, cpu usage and chip temperatures.
class
SystemInformationS3
SystemInformation message for series 3 devices. Carries memory usage, cpu usage and chip temperatures.
class
SystemLoggerProperties
SystemLoggerProperties structure
class
TensorInfo
TensorInfo structure
class
ThermalAmbientParams
Ambient factors that affect the temperature measurement of a Thermal sensor.
class
ThermalConfig
ThermalConfig message. Currently unused.
class
ThermalGainMode
Thermal sensor gain mode. Use low gain in high energy environments.  Members:    LOW    HIGH
class
ThermalProperties
Specify properties for Thermal
class
Timestamp
Timestamp structure
class
ToFConfig
ToFConfig message. Carries config for the ToF node.
class
ToFProperties
Specify properties for ToF
class
TrackedFeature
TrackedFeature structure
class
TrackedFeatures
TrackedFeatures message. Carries position (X, Y) of tracked features and their ID.
class
TrackerIdAssignmentPolicy
Members:    UNIQUE_ID    SMALLEST_ID
class
TrackerType
Members:    SHORT_TERM_KCF : Kernelized Correlation Filter tracking    SHORT_TERM_IMAGELESS : Short-term tracking without using image data    ZERO_TERM_IMAGELESS : Zero-term tracking of objects without accessing image data    ZERO_TERM_COLOR_HISTOGRAM : Zero-term tracking that also uses image data
class
Tracklet
Tracklet structure  Contains tracklets from object tracker output.
class
Tracklets
Tracklets message. Carries object tracking information.
class
TransformData
TransformData message. Carries transform in x,y,z,qx,qy,qz,qw format.
class
UVCProperties
Properties for UVC node
class
UsbSpeed
USB connection speed  Members:    UNKNOWN    LOW    FULL    HIGH    SUPER    SUPER_PLUS
class
Version
Version structure
class
VideoEncoderProperties
Specify properties for VideoEncoder such as profile, bitrate, ...
class
WarpProperties
Specify properties for Warp
class
XLinkConnection
Represents connection between host and device over XLink protocol
class
XLinkDeviceState
Members:    X_LINK_ANY_STATE    X_LINK_BOOTED    X_LINK_UNBOOTED    X_LINK_BOOTLOADER    X_LINK_FLASH_BOOTED    X_LINK_BOOTED_NON_EXCLUSIVE    X_LINK_GATE    X_LINK_GATE_BOOTED    X_LINK_GATE_SETUP
exception
class
XLinkError_t
Members:    X_LINK_SUCCESS    X_LINK_ALREADY_OPEN    X_LINK_COMMUNICATION_NOT_OPEN    X_LINK_COMMUNICATION_FAIL    X_LINK_COMMUNICATION_UNKNOWN_ERROR    X_LINK_DEVICE_NOT_FOUND    X_LINK_TIMEOUT    X_LINK_ERROR    X_LINK_OUT_OF_MEMORY    X_LINK_INSUFFICIENT_PERMISSIONS    X_LINK_DEVICE_ALREADY_IN_USE    X_LINK_NOT_IMPLEMENTED    X_LINK_INIT_USB_ERROR    X_LINK_INIT_TCP_IP_ERROR    X_LINK_INIT_PCIE_ERROR
class
XLinkPlatform
Members:    X_LINK_ANY_PLATFORM    X_LINK_MYRIAD_2    X_LINK_MYRIAD_X    X_LINK_RVC3    X_LINK_RVC4
class
XLinkProtocol
Members:    X_LINK_USB_VSC    X_LINK_USB_CDC    X_LINK_PCIE    X_LINK_TCP_IP    X_LINK_IPC    X_LINK_NMB_OF_PROTOCOLS    X_LINK_ANY_PROTOCOL
exception
class
connectionInterface
Members:    USB    ETHERNET    WIFI
function
downloadModelsFromZoo(path: str, cacheDirectory: str = '', apiKey: str = '', progressFormat: str = 'none') -> bool: bool
Helper function allowing one to download all models specified in yaml files in the given path and store them in the cache directory  Parameter ``path:``:     Path to the directory containing yaml files  Parameter ``cacheDirectory:``:     Cache directory where the cached models are stored, default is "". If cacheDirectory is set to "", this function checks the DEPTHAI_ZOO_CACHE_PATH environment variable and uses that if set, otherwise the default is used (see getDefaultCachePath).  Parameter ``apiKey:``:     API key for the model zoo, default is "". If apiKey is set to "", this function checks the DEPTHAI_ZOO_API_KEY environment variable and uses that if set. Otherwise, no API key is used.  Parameter ``progressFormat:``:     Format to use for progress output (possible values: pretty, json, none), default is "none"  Returns:     bool: True if all models were downloaded successfully, False otherwise
function
getModelFromZoo(modelDescription: NNModelDescription, useCached: bool = True, cacheDirectory: str = '', apiKey: str = '', progressFormat: str = 'none') -> str: str
Get model from model zoo  Parameter ``modelDescription:``:     Model description  Parameter ``useCached:``:     Use cached model if present, default is true  Parameter ``cacheDirectory:``:     Cache directory where the cached models are stored, default is "". If cacheDirectory is set to "", this function checks the DEPTHAI_ZOO_CACHE_PATH environment variable and uses that if set, otherwise the default value is used (see getDefaultCachePath).  Parameter ``apiKey:``:     API key for the model zoo, default is "". If apiKey is set to "", this function checks the DEPTHAI_ZOO_API_KEY environment variable and uses that if set. Otherwise, no API key is used.  Parameter ``progressFormat:``:     Format to use for progress output (possible values: pretty, json, none), default is "none"  Returns:     std::string: Path to the model in cache
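Taken together, a model can be described and fetched from the zoo in a few lines (a sketch; the NNModelDescription constructor fields and the model slug are assumptions to verify against your release):

    import depthai as dai

    # Describe the model to fetch; 'model' and 'platform' are illustrative fields.
    desc = dai.NNModelDescription(model="yolov6-nano", platform="RVC2")
    modelPath = dai.getModelFromZoo(desc, useCached=True)
    print("Model cached at:", modelPath)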
function
platform2string(arg0: Platform) -> str: str
Convert Platform enum to string  Parameter ``platform``:     Platform enum  Returns:     std::string String representation of Platform
function
readModelType(modelPath: str) -> ModelType: ModelType
Read model type from model path  Parameter ``modelPath``:     Path to model  Returns:     ModelType
function
string2platform(arg0: str) -> Platform: Platform
Convert string to Platform enum  Parameter ``platform``:     String representation of Platform  Returns:     Platform Platform enum
module

depthai.modelzoo

function
getDefaultCachePath() -> str: str
Get the default cache path (where models are cached)
function
getDefaultModelsPath() -> str: str
Get the default models path (where yaml files are stored)
function
getDownloadEndpoint() -> str: str
Get the download endpoint (for model querying)
function
getHealthEndpoint() -> str: str
Get the health endpoint (for internet check)
function
setDefaultCachePath(path: str)
Set the default cache path (where models are cached)  Parameter ``path``:     New default cache path
function
setDefaultModelsPath(path: str)
Set the default models path (where yaml files are stored)  Parameter ``path``:     New default models path
function
setDownloadEndpoint(endpoint: str)
Set the download endpoint (for model querying)  Parameter ``endpoint``:     New download endpoint URL
function
setHealthEndpoint(endpoint: str)
Set the health endpoint (for internet check)  Parameter ``endpoint``:     New health endpoint URL
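These getters and setters can be combined, for instance, to redirect the model cache before any downloads happen (the path below is illustrative):

    import depthai as dai

    print(dai.modelzoo.getDefaultCachePath())                # inspect the current default
    dai.modelzoo.setDefaultCachePath("/tmp/depthai_models")  # illustrative path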
package

depthai.nn_archive

module

depthai.nn_archive.v1

class
Config
The main class of the multi/single-stage model config scheme (multi-stage models consist of interconnected single-stage models).  @type config_version: str @ivar config_version: String representing config schema version in format 'x.y' where x is major version and y is minor version @type model: Model @ivar model: A Model object representing the neural network used in the archive.
class
DataType
Data type of a model's input data, output data, or weights (e.g., 'float32'). Represents all existing data types used in the i/o streams of the model.  Members:    BOOLEAN    FLOAT16    FLOAT32    FLOAT64    INT4    INT8    INT16    INT32    INT64    UINT4    UINT8    UINT16    UINT32    UINT64    STRING
class
Head
Represents head of a model.  @type name: str | None @ivar name: Optional name of the head. @type parser: str @ivar parser: Name of the parser responsible for processing the model's output. @type outputs: List[str] | None @ivar outputs: Specify which outputs are fed into the parser. If None, all outputs are fed. @type metadata: C{HeadMetadata} | C{HeadObjectDetectionMetadata} | C{HeadClassificationMetadata} | C{HeadObjectDetectionSSDMetadata} | C{HeadSegmentationMetadata} | C{HeadYOLOMetadata} @ivar metadata: Metadata of the parser.
class
Input
Represents input stream of a model.  @type name: str @ivar name: Name of the input layer.  @type dtype: DataType @ivar dtype: Data type of the input data (e.g., 'float32').  @type input_type: InputType @ivar input_type: Type of input data (e.g., 'image').  @type shape: list @ivar shape: Shape of the input data as a list of integers (e.g. [H,W], [H,W,C], [N,H,W,C], ...).  @type layout: str @ivar layout: Lettercode interpretation of the input data dimensions (e.g., 'NCHW').  @type preprocessing: PreprocessingBlock @ivar preprocessing: Preprocessing steps applied to the input data.
class
InputType
Members:    IMAGE    RAW
class
Metadata
Metadata of the parser.  Metadata for the object detection head.  @type classes: list @ivar classes: Names of object classes detected by the model. @type n_classes: int @ivar n_classes: Number of object classes detected by the model. @type iou_threshold: float @ivar iou_threshold: Non-max suppression threshold limiting boxes intersection. @type conf_threshold: float @ivar conf_threshold: Confidence score threshold above which a detected object is considered valid. @type max_det: int @ivar max_det: Maximum detections per image. @type anchors: list @ivar anchors: Predefined bounding boxes of different sizes and aspect ratios. The innermost lists are length 2 tuples of box sizes. The middle lists are anchors for each output. The outermost lists go from smallest to largest output.  Metadata for the classification head.  @type classes: list @ivar classes: Names of object classes classified by the model. @type n_classes: int @ivar n_classes: Number of object classes classified by the model. @type is_softmax: bool @ivar is_softmax: True if the output is already softmaxed  Metadata for the SSD object detection head.  @type boxes_outputs: str @ivar boxes_outputs: Output name corresponding to predicted bounding box coordinates. @type scores_outputs: str @ivar scores_outputs: Output name corresponding to predicted bounding box confidence scores.  Metadata for the segmentation head.  @type classes: list @ivar classes: Names of object classes segmented by the model. @type n_classes: int @ivar n_classes: Number of object classes segmented by the model. @type is_softmax: bool @ivar is_softmax: True if the output is already softmaxed  Metadata for the YOLO head.  @type yolo_outputs: list @ivar yolo_outputs: A list of output names for each of the different YOLO grid sizes. @type mask_outputs: list | None @ivar mask_outputs: A list of output names for each mask output. @type protos_outputs: str | None @ivar protos_outputs: Output name for the protos. @type keypoints_outputs: list | None @ivar keypoints_outputs: A list of output names for the keypoints. @type angles_outputs: list | None @ivar angles_outputs: A list of output names for the angles. @type subtype: str @ivar subtype: YOLO family decoding subtype (e.g. yolov5, yolov6, yolov7 etc.) @type n_prototypes: int | None @ivar n_prototypes: Number of prototypes per bbox in YOLO instance segmentation. @type n_keypoints: int | None @ivar n_keypoints: Number of keypoints per bbox in YOLO keypoint detection. @type is_softmax: bool | None @ivar is_softmax: True if the output is already softmaxed in YOLO instance segmentation  Metadata for the basic head. It allows you to specify additional fields.  @type postprocessor_path: str | None @ivar postprocessor_path: Path to the postprocessor.
class
MetadataClass
Metadata object defining the model metadata.  Represents metadata of a model.  @type name: str @ivar name: Name of the model. @type path: str @ivar path: Relative path to the model executable.
class
Model
A Model object representing the neural network used in the archive.  Class defining a single-stage model config scheme.  @type metadata: Metadata @ivar metadata: Metadata object defining the model metadata. @type inputs: list @ivar inputs: List of Input objects defining the model inputs. @type outputs: list @ivar outputs: List of Output objects defining the model outputs. @type heads: list @ivar heads: List of Head objects defining the model heads. If not defined, we assume a raw output.
class
Output
Represents output stream of a model.  @type name: str @ivar name: Name of the output layer. @type dtype: DataType @ivar dtype: Data type of the output data (e.g., 'float32').
class
PreprocessingBlock
Preprocessing steps applied to the input data.  Represents preprocessing operations applied to the input data.  @type mean: list | None @ivar mean: Mean values in channel order. The order depends on the channel order the model was trained with. @type scale: list | None @ivar scale: Standardization values in channel order. The order depends on the channel order the model was trained with. @type reverse_channels: bool | None @ivar reverse_channels: If True, input to the model is RGB; otherwise BGR. @type interleaved_to_planar: bool | None @ivar interleaved_to_planar: If True, input to the model is interleaved (NHWC); otherwise planar (NCHW). @type dai_type: str | None @ivar dai_type: DepthAI input type which is read by DepthAI to automatically set up the pipeline.
class

depthai.nn_archive.v1.Config

method
property
configVersion
String representing config schema version in format 'x.y' where x is major version and y is minor version.
method
property
model
A Model object representing the neural network used in the archive.
method
class

depthai.nn_archive.v1.Head

method
property
metadata
Metadata of the parser.
method
property
name
Optional name of the head.
method
property
outputs
Specify which outputs are fed into the parser. If None, all outputs are fed.
method
property
parser
Name of the parser responsible for processing the model's output.
method
class

depthai.nn_archive.v1.Input

method
property
dtype
Data type of the input data (e.g., 'float32').
method
property
inputType
Type of input data (e.g., 'image').
method
property
layout
Lettercode interpretation of the input data dimensions (e.g., 'NCHW')
method
property
name
Name of the input layer.
method
property
preprocessing
Preprocessing steps applied to the input data.
method
property
shape
Shape of the input data as a list of integers (e.g. [H,W], [H,W,C], [N,H,W,C], ...).
method
class

depthai.nn_archive.v1.Metadata

method
property
anchors
Predefined bounding boxes of different sizes and aspect ratios. The innermost lists are length 2 tuples of box sizes. The middle lists are anchors for each output. The outermost lists go from smallest to largest output.
method
property
anglesOutputs
A list of output names for the angles.
method
property
boxesOutputs
Output name corresponding to predicted bounding box coordinates.
method
property
classes
Names of object classes recognized by the model.
method
property
confThreshold
Confidence score threshold above which a detected object is considered valid.
method
property
extraParams
Additional parameters
method
property
iouThreshold
Non-max suppression threshold limiting boxes intersection.
method
property
isSoftmax
True if the output is already softmaxed (for the YOLO head, applies to the instance segmentation output).
method
property
keypointsOutputs
A list of output names for the keypoints.
method
property
maskOutputs
A list of output names for each mask output.
method
property
maxDet
Maximum detections per image.
method
property
nClasses
Number of object classes recognized by the model.
method
property
nKeypoints
Number of keypoints per bbox in YOLO keypoint detection.
method
property
nPrototypes
Number of prototypes per bbox in YOLO instance segmentation.
method
property
postprocessorPath
Path to the postprocessor.
method
property
protosOutputs
Output name for the protos.
method
property
scoresOutputs
Output name corresponding to predicted bounding box confidence scores.
method
property
subtype
YOLO family decoding subtype (e.g. yolov5, yolov6, yolov7 etc.).
method
property
yoloOutputs
A list of output names for each of the different YOLO grid sizes.
method
class

depthai.nn_archive.v1.MetadataClass

method
property
name
Name of the model.
method
property
path
Relative path to the model executable.
method
property
precision
Precision of the model weights.
method
class

depthai.nn_archive.v1.Model

method
property
heads
List of Head objects defining the model heads. If not defined, we assume a raw output.
method
property
inputs
List of Input objects defining the model inputs.
method
property
metadata
Metadata object defining the model metadata.
method
property
outputs
List of Output objects defining the model outputs.
method
class

depthai.nn_archive.v1.Output

method
property
dtype
Data type of the output data (e.g., 'float32').
method
property
layout
List of letters describing the output layout (e.g. 'NC').
method
property
name
Name of the output layer.
method
property
shape
Shape of the output as a list of integers (e.g. [1, 1000]).
method
class

depthai.nn_archive.v1.PreprocessingBlock

method
property
daiType
DepthAI input type which is read by DepthAI to automatically set up the pipeline.
method
property
interleavedToPlanar
If True, input to the model is interleaved (NHWC); otherwise planar (NCHW).
method
property
mean
Mean values in channel order. The order depends on the channel order the model was trained with.
method
property
reverseChannels
If True, input to the model is RGB; otherwise BGR.
method
property
scale
Standardization values in channel order. The order depends on the channel order the model was trained with.
method
package

depthai.node

module
class
AprilTag
AprilTag node.
class
BasaltVIO
Basalt Visual Inertial Odometry node. Performs VIO on stereo images and IMU data.
class
ColorCamera
ColorCamera node. For use with color sensors.
class
DetectionNetwork
DetectionNetwork, base for different network specializations
class
DetectionParser
DetectionParser node. Parses detection results from different neural networks and is used internally by MobileNetDetectionNetwork and YoloDetectionNetwork.
class
EdgeDetector
EdgeDetector node. Performs edge detection using 3x3 Sobel filter
class
FeatureTracker
FeatureTracker node. Performs feature tracking and reidentification using motion estimation between 2 consecutive frames.
class
IMU
IMU node for BNO08X.
class
ImageAlign
ImageAlign node. Aligns frames from one sensor to the viewpoint of another (e.g. depth to color).
class
ImageManip
ImageManip node. Capability to crop, resize, warp, ... incoming image frames
class
MonoCamera
MonoCamera node. For use with grayscale sensors.
class
NeuralNetwork
NeuralNetwork node. Runs a neural inference on input data.
class
ObjectTracker
ObjectTracker node. Performs object tracking using Kalman filter and Hungarian algorithm.
class
PointCloud
PointCloud node. Computes point cloud from depth frames.
class
RGBD
RGBD node. Combines depth and color frames into a single point cloud.
class
RTABMapSLAM
RTABMap SLAM node. Performs SLAM on given odometry pose, rectified frame and depth frame.
class
RTABMapVIO
RTABMap Visual Inertial Odometry node. Performs VIO on rectified frame, depth frame and IMU data.
class
RecordMetadataOnly
RecordMetadataOnly node, used to record a source stream to a file
class
RecordVideo
RecordVideo node, used to record a video source stream to a file
class
ReplayMetadataOnly
Replay node, used to replay a file to a source node
class
ReplayVideo
Replay node, used to replay a file to a source node
class
SPIIn
SPIIn node. Receives messages over SPI.
class
SPIOut
SPIOut node. Sends messages over SPI.
class
SpatialDetectionNetwork
SpatialDetectionNetwork node. Runs a neural inference on input image and calculates spatial location data.
class
SpatialLocationCalculator
SpatialLocationCalculator node. Calculates spatial location data on a set of ROIs on depth map.
class
StereoDepth
StereoDepth node. Computes stereo disparity and depth from a left-right image pair.
class
Sync
Sync node. Performs syncing between image frames
class
SystemLogger
SystemLogger node. Sends system information periodically.
class
Thermal
Thermal node.
class
ToF
ToF node. Converts raw Time-of-Flight sensor data into a depth frame.
class
UVC
UVC (USB Video Class) node
class
VideoEncoder
VideoEncoder node. Encodes frames into MJPEG, H264 or H265.
class
Warp
Warp node. Capability to crop, resize, warp, ... incoming image frames
class

depthai.node.AprilTag(depthai.DeviceNode)

method
getNumThreads(self) -> int: int
Get number of threads to use for AprilTag detection.  Returns:     Number of threads to use.
method
getWaitForConfigInput(self) -> bool: bool
Get whether or not to wait until a configuration message arrives on the inputConfig input.
method
runOnHost(self) -> bool: bool
Check if the node is set to run on host
method
setNumThreads(self, numThreads: int)
Set number of threads to use for AprilTag detection.  Parameter ``numThreads``:     Number of threads to use.
method
setRunOnHost(self, arg0: bool)
Specify whether to run on host or device. By default, the node will run on device.
method
setWaitForConfigInput(self, wait: bool)
Specify whether or not to wait until a configuration message arrives on the inputConfig input.  Parameter ``wait``:     True to wait for a configuration message, false otherwise.
property
initialConfig
Initial config to use for AprilTag detection.
property
inputConfig
Input AprilTagConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
inputImage
Input image on which AprilTag detection is performed. Default queue is non-blocking with size 4.
property
out
Outputs AprilTags message that carries the detection results.
property
passthroughInputImage
Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
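Putting the node's inputs and outputs together, a minimal AprilTag detection pipeline might look like the following (a sketch; Camera.requestOutput and its size/type arguments are assumptions based on the Camera entries elsewhere on this page):

    import depthai as dai

    with dai.Pipeline() as pipeline:
        cam = pipeline.create(dai.node.Camera).build()
        april = pipeline.create(dai.node.AprilTag)
        # Feed a grayscale stream into the detector (size is illustrative).
        cam.requestOutput((640, 400), dai.ImgFrame.Type.GRAY8).link(april.inputImage)
        tagQueue = april.out.createOutputQueue()

        pipeline.start()
        tags = tagQueue.get()  # AprilTags message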
class

depthai.node.BasaltVIO(depthai.node.ThreadedHostNode)

variable
method
property
imu
Input IMU data.
property
passthrough
Output passthrough of left image.
property
transform
Output transform data.
class

depthai.node.BenchmarkIn(depthai.DeviceNode)

method
sendReportEveryNMessages(self, num: int)
Specify how many messages to measure for each report
method
setRunOnHost(self, runOnHost: bool)
Specify whether to run on host or device. By default, the node will run on device.
property
input
Receive messages as fast as possible
property
passthrough
Passthrough for input messages (so the node can be placed between other nodes)
property
report
Send a benchmark report when the set number of messages are received
class

depthai.node.BenchmarkOut(depthai.DeviceNode)

method
setFps(self, fps: float)
Set FPS at which the node is sending out messages. 0 means as fast as possible
method
setNumMessagesToSend(self, num: int)
Sets number of messages to send, by default send messages indefinitely  Parameter ``num``:     number of messages to send
method
setRunOnHost(self, runOnHost: bool)
Specify whether to run on host or device. By default, the node will run on device.
property
input
Message that will be sent repeatedly
property
out
Send messages out as fast as possible
class

depthai.node.Camera(depthai.DeviceNode)

method
getBoardSocket(self) -> depthai.CameraBoardSocket: depthai.CameraBoardSocket
Retrieves which board socket to use  Returns:     Board socket to use
method
requestFullResolutionOutput(self, type: depthai.ImgFrame.Type | None = None, fps: float | None = None, useHighestResolution: bool = False) -> depthai.Node.Output: depthai.Node.Output
Get a high resolution output with full FOV on the sensor. By default the function will not use resolutions higher than 5000x4000, as those often need a lot of resources, making them hard to use in combination with other nodes.  Parameter ``type``:     Type of the output (NV12, BGR, ...) - by default it's auto-selected for best     performance  Parameter ``fps``:     FPS of the output - by default it's auto-selected as the highest FPS the     sensor config supports, or 30, whichever is lower  Parameter ``useHighestResolution``:     If true, the function will use the highest resolution available on the     sensor, even if it's higher than 5000x4000
method
setMockIsp(self, mockIsp: ReplayVideo) -> Camera: Camera
Set mock ISP for Camera node. Automatically sets mockIsp size.  Parameter ``mockIsp``:     ReplayVideo node to use as mock ISP
property
initialControl
Initial control options to apply to sensor
property
inputControl
Input for CameraControl message, which can modify camera parameters in runtime
property
mockIsp
Input for mocking 'isp' functionality on RVC2. Default queue is blocking with size 8
property
raw
Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.  Captured directly from the camera sensor, and the source for the 'isp' output.
class

depthai.node.ColorCamera(depthai.DeviceNode)

method
getBoardSocket(self) -> depthai.CameraBoardSocket: depthai.CameraBoardSocket
Retrieves which board socket to use  Returns:     Board socket to use
method
getCamera(self) -> str: str
Retrieves which camera to use by name  Returns:     Name of the camera to use
method
getFp16(self) -> bool: bool
Get fp16 (0..255) data of preview output frames
method
getFps(self) -> float: float
Get rate at which camera should produce frames  Returns:     Rate in frames per second
method
getInterleaved(self) -> bool: bool
Get planar or interleaved data of preview output frames
method
getIspHeight(self) -> int: int
Get 'isp' output height
method
getIspNumFramesPool(self) -> int: int
Get number of frames in isp pool
method
getIspSize(self) -> tuple[int, int]: tuple[int, int]
Get 'isp' output resolution as size, after scaling
method
getIspWidth(self) -> int: int
Get 'isp' output width
method
getPreviewKeepAspectRatio(self) -> bool: bool
See also:     setPreviewKeepAspectRatio  Returns:     Preview keep aspect ratio option
method
getPreviewNumFramesPool(self) -> int: int
Get number of frames in preview pool
method
getRawNumFramesPool(self) -> int: int
Get number of frames in raw pool
method
getResolutionHeight(self) -> int: int
Get sensor resolution height
method
getResolutionWidth(self) -> int: int
Get sensor resolution width
method
getSensorCropX(self) -> float: float
Get sensor top left x crop coordinate
method
getSensorCropY(self) -> float: float
Get sensor top left y crop coordinate
method
getStillNumFramesPool(self) -> int: int
Get number of frames in still pool
method
getVideoNumFramesPool(self) -> int: int
Get number of frames in video pool
method
sensorCenterCrop(self)
Specify sensor center crop. The crop region is determined by the ratio of resolution size to video size.
method
setBoardSocket(self, boardSocket: depthai.CameraBoardSocket)
Specify which board socket to use  Parameter ``boardSocket``:     Board socket to use
method
setCamera(self, name: str)
Specify which camera to use by name  Parameter ``name``:     Name of the camera to use
method
setColorOrder(self, colorOrder: depthai.ColorCameraProperties.ColorOrder)
Set color order of preview output images. RGB or BGR
method
setFp16(self, fp16: bool)
Set fp16 (0..255) data type of preview output frames
method
setFps(self, fps: float)
Set rate at which camera should produce frames  Parameter ``fps``:     Rate in frames per second
method
setInterleaved(self, interleaved: bool)
Set planar or interleaved data of preview output frames
method
setIsp3aFps(self, arg0: int)
Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A is running on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also increases the convergence time of 3A. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps, and camera control is sent at every frame, but 3A fps is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.
method
setIspNumFramesPool(self, arg0: int)
Set number of frames in isp pool
method
setPreviewKeepAspectRatio(self, keep: bool)
Specifies whether the preview output should preserve aspect ratio after downscaling from video size.  Parameter ``keep``:     If true, a larger crop region will be considered to still be able to create     the final image in the specified aspect ratio. Otherwise video size is     resized to fit preview size
method
setPreviewNumFramesPool(self, arg0: int)
Set number of frames in preview pool
method
setRawNumFramesPool(self, arg0: int)
Set number of frames in raw pool
method
setRawOutputPacked(self, packed: bool)
Configures whether the camera `raw` frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (`false`), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000'00pp'pppp'pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.
method
setSensorCrop(self, x: float, y: float)
Specifies the cropping that happens when converting ISP to video output. By default, video will be center cropped from the ISP output. Note that this doesn't actually do on-sensor cropping (and MIPI-stream only that region), but it does postprocessing on the ISP (on RVC).  Parameter ``x``:     Top left X coordinate  Parameter ``y``:     Top left Y coordinate
method
setStillNumFramesPool(self, arg0: int)
Set number of frames in still pool
method
setVideoNumFramesPool(self, arg0: int)
Set number of frames in video pool
method
property
frameEvent
Outputs metadata-only ImgFrame message as an early indicator of an incoming frame.  It's sent on the MIPI SoF (start-of-frame) event, just after the exposure of the current frame has finished and before the exposure for next frame starts. Could be used to synchronize various processes with camera capture. Fields populated: camera id, sequence number, timestamp
property
initialControl
Initial control options to apply to sensor
property
inputControl
Input for CameraControl message, which can modify camera parameters in runtime
property
isp
Outputs ImgFrame message that carries YUV420 planar (I420/IYUV) frame data.  Generated by the ISP engine, and the source for the 'video', 'preview' and 'still' outputs
property
preview
Outputs ImgFrame message that carries BGR/RGB planar/interleaved encoded frame data.  Suitable for use with NeuralNetwork node
property
raw
Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.  Captured directly from the camera sensor, and the source for the 'isp' output.
property
still
Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.  The message is sent only when a CameraControl message arrives to inputControl with captureStill command set.
property
video
Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.  Suitable for use with VideoEncoder node
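A typical ColorCamera setup combines the setters above (a sketch in the classic explicit-configuration style; the chosen values are illustrative):

    import depthai as dai

    pipeline = dai.Pipeline()
    cam = pipeline.create(dai.node.ColorCamera)
    cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)
    cam.setFps(30)
    cam.setInterleaved(False)  # planar preview frames, as NeuralNetwork expects
    cam.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)
    # The 'preview', 'video', 'still', 'isp' and 'raw' outputs can then be
    # linked to other nodes, e.g. cam.preview.link(nn.input).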
class

depthai.node.DetectionNetwork(depthai.DeviceNodeGroup)

method
getConfidenceThreshold(self) -> float: float
Retrieves threshold at which to filter the rest of the detections.  Returns:     Detection confidence
method
getNumInferenceThreads(self) -> int: int
How many inference threads will be used to run the network  Returns:     Number of threads, 0, 1 or 2. Zero means AUTO
method
setBackend(self, setBackend: str)
Specifies backend to use  Parameter ``backend``:     String specifying backend to use
method
setBackendProperties(self, setBackendProperties: dict [ str , str ])
Set backend properties  Parameter ``backendProperties``:     backend properties map
method
setBlobPath(self, path: Path)
Load network blob into assets and use once pipeline is started.  Throws:     Error if file doesn't exist or isn't a valid network blob.  Parameter ``path``:     Path to network blob
method
setConfidenceThreshold(self, thresh: float)
Specifies confidence threshold at which to filter the rest of the detections.  Parameter ``thresh``:     Detection confidence must be greater than specified threshold to be added to     the list
method
setFromModelZoo(self, description: depthai.NNModelDescription, useCached: bool = False)
Download model from zoo and set it for this Node  Parameter ``description:``:     Model description to download  Parameter ``useCached:``:     Use cached model if available
method
setModelPath(self, modelPath: Path)
Load network model into assets.  Parameter ``modelPath``:     Path to the model file.
method
setNumInferenceThreads(self, numThreads: int)
How many threads should the node use to run the network.  Parameter ``numThreads``:     Number of threads to dedicate to this node
method
setNumNCEPerInferenceThread(self, numNCEPerThread: int)
How many Neural Compute Engines should a single thread use for inference  Parameter ``numNCEPerThread``:     Number of NCE per thread
method
setNumPoolFrames(self, numFrames: int)
Specifies how many frames will be available in the pool  Parameter ``numFrames``:     How many frames will pool have
method
setNumShavesPerInferenceThread(self, numShavesPerInferenceThread: int)
How many Shaves should a single thread use for inference  Parameter ``numShavesPerThread``:     Number of shaves per thread
property
input
Input message with data to be inferred upon
property
out
Outputs ImgDetections message that carries parsed detection results. Overrides NeuralNetwork 'out' with ImgDetections output message type.
property
outNetwork
Outputs unparsed inference results.
property
passthrough
Passthrough message on which the inference was performed.  Suitable for when input queue is set to non-blocking behavior.
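In recent releases the node can be built directly from a camera and a model description (a sketch; the build() overload and the model slug are assumptions to verify against your release):

    import depthai as dai

    with dai.Pipeline() as pipeline:
        cam = pipeline.create(dai.node.Camera).build()
        nn = pipeline.create(dai.node.DetectionNetwork).build(
            cam, dai.NNModelDescription("yolov6-nano"))  # hypothetical model slug
        nn.setConfidenceThreshold(0.5)
        detQueue = nn.out.createOutputQueue()

        pipeline.start()
        detections = detQueue.get()  # ImgDetections message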
class

depthai.node.DetectionParser(depthai.DeviceNode)

method
build(self, arg0: depthai.Node.Output, arg1: depthai.NNArchive) -> DetectionParser: DetectionParser
Build DetectionParser node. Connect output to this node's input. Also call setNNArchive() with provided NNArchive.  Parameter ``nnInput:``:     Output to link  Parameter ``nnArchive:``:     Neural network archive
method
getConfidenceThreshold(self) -> float: float
Retrieves threshold at which to filter the rest of the detections.  Returns:     Detection confidence
method
getNumFramesPool(self) -> int: int
Returns number of frames in pool
method
setBlobPath(self, path: Path)
Load network blob into assets and use once pipeline is started.  Throws:     Error if file doesn't exist or isn't a valid network blob.  Parameter ``path``:     Path to network blob
method
setConfidenceThreshold(self, thresh: float)
Specifies confidence threshold at which to filter the rest of the detections.  Parameter ``thresh``:     Detection confidence must be greater than specified threshold to be added to     the list
method
setNNArchive(self, nnArchive: depthai.NNArchive)
Set NNArchive for this Node. If the archive's type is SUPERBLOB, use default number of shaves.  Parameter ``nnArchive:``:     NNArchive to set
method
setNumFramesPool(self, numFramesPool: int)
Specify number of frames in pool.  Parameter ``numFramesPool``:     How many frames should the pool have
property
input
Input NN results with detection data to parse. Default queue is blocking with size 5.
property
out
Outputs ImgDetections message that carries parsed detection results.
class

depthai.node.EdgeDetector(depthai.DeviceNode)

method
setMaxOutputFrameSize(self, arg0: int)
Specify maximum size of output image.  Parameter ``maxFrameSize``:     Maximum frame size in bytes
method
setNumFramesPool(self, arg0: int)
Specify number of frames in pool.  Parameter ``numFramesPool``:     How many frames should the pool have
property
initialConfig
Initial config to use for edge detection.
property
inputConfig
Input EdgeDetectorConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
inputImage
Input image on which edge detection is performed. Default queue is non-blocking with size 4.
property
outputImage
Outputs image frame with detected edges
class

depthai.node.FeatureTracker(depthai.DeviceNode)

method
setHardwareResources(self, numShaves: int, numMemorySlices: int)
Specify allocated hardware resources for feature tracking. 2 shaves/memory slices are required for optical flow, 1 for corner detection only.  Parameter ``numShaves``:     Number of shaves. Maximum 2.  Parameter ``numMemorySlices``:     Number of memory slices. Maximum 2.
property
initialConfig
Initial config to use for feature tracking.
property
inputConfig
Input FeatureTrackerConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
inputImage
Input message with frame data on which feature tracking is performed. Default queue is non-blocking with size 4.
property
outputFeatures
Outputs TrackedFeatures message that carries tracked features results.
property
passthroughInputImage
Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
class

depthai.node.HostNode(depthai.node.ThreadedHostNode)

CLASS_METHOD
method
sendProcessingToPipeline(self, arg0: bool)
Send processing to pipeline. If set to true, it's important to call `pipeline.run()` or `pipeline.processTasks()` in the main thread; if set to false, no such action is needed.
property
class

depthai.node.IMU(depthai.DeviceNode)

method
getBatchReportThreshold(self) -> int: int
Above this packet threshold data will be sent to host, if queue is not blocked
method
getMaxBatchReports(self) -> int: int
Maximum number of IMU packets in a batch report
method
setBatchReportThreshold(self, batchReportThreshold: int)
Above this packet threshold data will be sent to host, if queue is not blocked
method
setMaxBatchReports(self, maxBatchReports: int)
Maximum number of IMU packets in a batch report
property
mockIn
Mock IMU data for replaying recorded data
property
out
Outputs IMUData message that carries IMU packets.
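The batching knobs above are typically combined with sensor selection when configuring the node (a sketch; enableIMUSensor is the usual configuration entry point in released APIs but is not listed on this page, so verify it):

    import depthai as dai

    with dai.Pipeline() as pipeline:
        imu = pipeline.create(dai.node.IMU)
        imu.enableIMUSensor(dai.IMUSensor.ACCELEROMETER_RAW, 400)  # sensor, rate [Hz]
        imu.setBatchReportThreshold(1)  # send data after every packet
        imu.setMaxBatchReports(10)      # cap packets per IMUData message
        imuQueue = imu.out.createOutputQueue()

        pipeline.start()
        imuData = imuQueue.get()  # IMUData message carrying IMUPackets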
class

depthai.node.ImageAlign(depthai.DeviceNode)

method
setNumShaves(self, numShaves: int) -> ImageAlign: ImageAlign
Specify number of shaves to use for this node
method
setOutKeepAspectRatio(self, keep: bool) -> ImageAlign: ImageAlign
Specify whether to keep aspect ratio when resizing
method
property
initialConfig
Initial config to use for image alignment.
property
input
Input message. Default queue is non-blocking with size 4.
property
inputAlignTo
Input align to message. Default queue is non-blocking with size 1.
property
inputConfig
Input message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
outputAligned
Outputs ImgFrame message that is aligned to inputAlignTo.
property
passthroughInput
Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
class

depthai.node.ImageManip(depthai.DeviceNode)

class
Backend
Members:    HW    CPU
class
PerformanceMode
Members:    BALANCED    PERFORMANCE    LOW_POWER
method
setBackend(self, arg0: ImageManip.Backend) -> ImageManip: ImageManip
Set backend preference  Parameter ``backend``:     Backend preference
method
setMaxOutputFrameSize(self, arg0: int)
Specify maximum size of output image.  Parameter ``maxFrameSize``:     Maximum frame size in bytes
method
setNumFramesPool(self, arg0: int)
Specify number of frames in pool.  Parameter ``numFramesPool``:     How many frames should the pool have
method
setPerformanceMode(self, arg0: ImageManip.PerformanceMode) -> ImageManip: ImageManip
Set performance mode  Parameter ``performanceMode``:     Performance mode
method
setRunOnHost(self, arg0: bool) -> ImageManip: ImageManip
Specify whether to run on host or device  Parameter ``runOnHost``:     Run node on host
property
initialConfig
Initial config to use when manipulating frames
property
inputConfig
Input ImageManipConfig message with ability to modify parameters in runtime
property
inputImage
Input image to be modified
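As an example, resizing incoming frames with the config above (a sketch; setOutputSize follows the newer ImageManipConfig API, while older releases used setResize, so verify the name against your version):

    import depthai as dai

    manip = pipeline.create(dai.node.ImageManip)  # 'pipeline' as in earlier sketches
    manip.initialConfig.setOutputSize(640, 400)   # target size; name may vary by release
    manip.setMaxOutputFrameSize(640 * 400 * 3)    # byte budget for an RGB frame
    # e.g. cam.preview.link(manip.inputImage), then link the manip output onward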
class

depthai.node.MessageDemux(depthai.DeviceNode)

property
input
Input message of type MessageGroup
property
outputs
A map of outputs, where keys are the same as in the input MessageGroup
class

depthai.node.MonoCamera(depthai.DeviceNode)

method
getBoardSocket(self) -> depthai.CameraBoardSocket: depthai.CameraBoardSocket
Retrieves which board socket to use  Returns:     Board socket to use
method
getCamera(self) -> str: str
Retrieves which camera to use by name  Returns:     Name of the camera to use
method
getFps(self) -> float: float
Get rate at which camera should produce frames  Returns:     Rate in frames per second
method
getNumFramesPool(self) -> int: int
Get number of frames in main (ISP output) pool
method
getRawNumFramesPool(self) -> int: int
Get number of frames in raw pool
method
method
getResolutionHeight(self) -> int: int
Get sensor resolution height
method
method
getResolutionWidth(self) -> int: int
Get sensor resolution width
method
setBoardSocket(self, boardSocket: depthai.CameraBoardSocket)
Specify which board socket to use  Parameter ``boardSocket``:     Board socket to use
method
method
setCamera(self, name: str)
Specify which camera to use by name  Parameter ``name``:     Name of the camera to use
method
setFps(self, fps: float)
Set rate at which camera should produce frames  Parameter ``fps``:     Rate in frames per second
method
method
method
setIsp3aFps(self, arg0: int)
Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A runs on each frame. Reducing the 3A rate reduces the CPU usage on CSS, but also slows 3A convergence. Note that camera controls are processed at this rate. E.g. if the camera runs at 30 fps, a camera control is sent on every frame, but 3A FPS is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.
method
setNumFramesPool(self, arg0: int)
Set number of frames in main (ISP output) pool
method
setRawNumFramesPool(self, arg0: int)
Set number of frames in raw pool
method
setRawOutputPacked(self, packed: bool)
Configures whether the camera `raw` frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (`false`), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000'00pp'pppp'pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.
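As an illustration of the packed layout described above (not part of the API): in MIPI RAW10 every 5 bytes hold 4 pixels, 4 bytes of 8 MSBs followed by 1 byte carrying the four 2-bit LSBs. A host-side unpacking sketch:

    import numpy as np

    def unpack_raw10(packed: np.ndarray) -> np.ndarray:
        # packed: flat uint8 array whose length is a multiple of 5
        groups = packed.reshape(-1, 5).astype(np.uint16)
        msb = groups[:, :4] << 2                      # 8 MSBs of pixels 0..3
        lsb_byte = groups[:, 4]
        lsb = np.stack([(lsb_byte >> (2 * i)) & 0x3 for i in range(4)], axis=1)
        return (msb | lsb).reshape(-1)                # lsb-aligned 10-bit values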
method
property
property
initialControl
Initial control options to apply to sensor
property
property
property
class

depthai.node.NeuralNetwork(depthai.DeviceNode)

method
method
getNumInferenceThreads(self) -> int: int
How many inference threads will be used to run the network  Returns:     Number of threads, 0, 1 or 2. Zero means AUTO
method
setBackend(self, setBackend: str)
Specifies backend to use  Parameter ``backend``:     String specifying backend to use
method
setBackendProperties(self, setBackendProperties: dict [ str , str ])
Set backend properties  Parameter ``backendProperties``:     backend properties map
method
method
setBlobPath(self, path: Path)
Load network blob into assets and use once pipeline is started.  Throws:     Error if file doesn't exist or isn't a valid network blob.  Parameter ``path``:     Path to network blob
method
setFromModelZoo(self, description: depthai.NNModelDescription, useCached: bool)
Download model from zoo and set it for this Node  Parameter ``description``:     Model description to download  Parameter ``useCached``:     Use cached model if available
method
setModelPath(self, modelPath: Path)
Load network xml and bin files into assets.  Parameter ``modelPath``:     Path to the neural network model file.
method
method
setNumInferenceThreads(self, numThreads: int)
How many threads should the node use to run the network.  Parameter ``numThreads``:     Number of threads to dedicate to this node
method
setNumNCEPerInferenceThread(self, numNCEPerThread: int)
How many Neural Compute Engines should a single thread use for inference  Parameter ``numNCEPerThread``:     Number of NCE per thread
method
setNumPoolFrames(self, numFrames: int)
Specifies how many frames will be available in the pool  Parameter ``numFrames``:     How many frames will pool have
method
setNumShavesPerInferenceThread(self, numShavesPerInferenceThread: int)
How many Shaves should a single thread use for inference  Parameter ``numShavesPerThread``:     Number of shaves per thread
property
input
Input message with data to be inferred upon
property
inputs
Inputs mapped to network inputs. Useful for inferring from separate data sources Default input is non-blocking with queue size 1 and waits for messages
property
out
Outputs NNData message that carries inference results
property
passthrough
Passthrough message on which the inference was performed.  Suitable for when input queue is set to non-blocking behavior.
property
passthroughs
Passthroughs which correspond to specified input
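A minimal sketch of the configuration above; the blob path and the upstream link are assumptions:

    import depthai as dai

    pipeline = dai.Pipeline()
    nn = pipeline.create(dai.node.NeuralNetwork)
    nn.setBlobPath("model.blob")  # assumption: path to an existing .blob file
    nn.setNumInferenceThreads(2)
    nn.setNumPoolFrames(4)
    # assumption: 'camera' is an upstream node producing frames sized for the model
    # camera.preview.link(nn.input)
    # nn.out emits NNData messages with the inference results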
class

depthai.node.ObjectTracker(depthai.DeviceNode)

method
setDetectionLabelsToTrack(self, labels: list [ int ])
Specify detection labels to track.  Parameter ``labels``:     Detection labels to track. By default, every label from the image detection     network output is tracked.
method
setMaxObjectsToTrack(self, maxObjectsToTrack: int)
Specify maximum number of objects to track.  Parameter ``maxObjectsToTrack``:     Maximum number of objects to track. Maximum 60 in case of SHORT_TERM_KCF,     otherwise 1000.
method
setTrackerIdAssignmentPolicy(self, type: depthai.TrackerIdAssignmentPolicy)
Specify tracker ID assignment policy.  Parameter ``type``:     Tracker ID assignment policy.
method
setTrackerThreshold(self, threshold: float)
Specify tracker threshold.  Parameter ``threshold``:     Only detected objects above this threshold will be tracked. Default 0: all     image detections are tracked.
method
setTrackerType(self, type: depthai.TrackerType)
Specify tracker type algorithm.  Parameter ``type``:     Tracker type.
method
setTrackingPerClass(self, trackingPerClass: bool)
Whether tracker should take into consideration class label for tracking.
property
inputConfig
Input ObjectTrackerConfig message with ability to modify parameters at runtime. Default queue is non-blocking with size 4.
property
inputDetectionFrame
Input ImgFrame message on which object detection was performed. Default queue is non-blocking with size 4.
property
inputDetections
Input message with image detections from the neural network. Default queue is non-blocking with size 4.
property
inputTrackerFrame
Input ImgFrame message on which tracking will be performed. RGBp, BGRp, NV12, YUV420p types are supported. Default queue is non-blocking with size 4.
property
out
Outputs Tracklets message that carries object tracking results.
property
passthroughDetectionFrame
Passthrough ImgFrame message on which object detection was performed. Suitable for when input queue is set to non-blocking behavior.
property
passthroughDetections
Passthrough image detections message from neural network output. Suitable for when input queue is set to non-blocking behavior.
property
passthroughTrackerFrame
Passthrough ImgFrame message on which tracking was performed. Suitable for when input queue is set to non-blocking behavior.
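A minimal sketch of the configuration above; the label id is illustrative and the detection-network links are assumptions:

    import depthai as dai

    pipeline = dai.Pipeline()
    tracker = pipeline.create(dai.node.ObjectTracker)
    tracker.setDetectionLabelsToTrack([15])  # illustrative: track a single class id
    tracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)
    tracker.setTrackerIdAssignmentPolicy(dai.TrackerIdAssignmentPolicy.SMALLEST_ID)
    # assumption: 'detNet' is a detection network node created earlier
    # detNet.passthrough.link(tracker.inputDetectionFrame)
    # detNet.passthrough.link(tracker.inputTrackerFrame)
    # detNet.out.link(tracker.inputDetections)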
class

depthai.node.PointCloud(depthai.DeviceNode)

method
setNumFramesPool(self, arg0: int)
Specify number of frames in pool.  Parameter ``numFramesPool``:     How many frames should the pool have
property
initialConfig
Initial config to use when computing the point cloud.
property
inputConfig
Input PointCloudConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
inputDepth
Input message with depth data used to create the point cloud. Default queue is non-blocking with size 4.
property
outputPointCloud
Outputs PointCloudData message
property
passthroughDepth
Passthrough depth from which the point cloud was calculated. Suitable for when input queue is set to non-blocking behavior.
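A minimal sketch, assuming a StereoDepth node 'stereo' created earlier provides the depth input:

    import depthai as dai

    pipeline = dai.Pipeline()
    pcl = pipeline.create(dai.node.PointCloud)
    pcl.setNumFramesPool(4)
    # stereo.depth.link(pcl.inputDepth)
    # pcl.outputPointCloud emits PointCloudData messages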
class

depthai.node.RGBD(depthai.node.ThreadedHostNode)

method
method
printDevices(self)
Print available GPU devices
method
method
useCPU(self)
Use single-threaded CPU for processing
method
useCPUMT(self, numThreads: int = 2)
Use multi-threaded CPU for processing  Parameter ``numThreads``:     Number of threads to use
method
useGPU(self, device: int = 0)
Use GPU for processing (needs to be compiled with Kompute support)  Parameter ``device``:     GPU device index
property
property
property
pcl
Output point cloud.
property
rgbd
Output RGBD frames.
class

depthai.node.RTABMapSLAM(depthai.node.ThreadedHostNode)

method
method
method
setAlphaScaling(self, alpha: float)
Set the alpha scaling factor for the camera model.
method
setDatabasePath(self, path: str)
Set RTABMap database path. "/tmp/rtabmap.tmp.db" by default.
method
setFreq(self, f: float)
Set the frequency at which the node processes data. 1Hz by default.
method
setLoadDatabaseOnStart(self, load: bool)
Whether to load the database on start. False by default.
method
method
method
setPublishGrid(self, publish: bool)
Whether to publish the occupancy grid map. True by default.
method
setPublishGroundCloud(self, publish: bool)
Whether to publish the ground point cloud. True by default.
method
setPublishObstacleCloud(self, publish: bool)
Whether to publish the obstacle point cloud. True by default.
method
setSaveDatabaseOnClose(self, save: bool)
Whether to save the database on close. False by default.
method
setSaveDatabasePeriod(self, period: float)
Set the interval at which the database is saved. 30.0s by default.
method
setSaveDatabasePeriodically(self, save: bool)
Whether to save the database periodically. False by default.
method
setUseFeatures(self, useFeatures: bool)
Whether to use input features for SLAM. False by default.
method
triggerNewMap(self)
Trigger a new map.
property
property
features
Input tracked features on which SLAM is performed (optional).
property
groundPCL
Output ground point cloud.
property
obstaclePCL
Output obstacle point cloud.
property
occupancyGridMap
Output occupancy grid map.
property
odom
Input odometry pose.
property
odomCorrection
Output odometry correction (map to odom).
property
passthroughDepth
Output passthrough depth image.
property
passthroughFeatures
Output passthrough features.
property
passthroughOdom
Output passthrough odometry pose.
property
passthroughRect
Output passthrough rectified image.
property
property
transform
Output transform.
class

depthai.node.RTABMapVIO(depthai.node.ThreadedHostNode)

method
method
method
method
setUseFeatures(self, useFeatures: bool)
Whether to use input features or calculate them internally.
property
property
features
Input tracked features on which VIO is performed (optional).
property
imu
Input IMU data.
property
passthroughDepth
Passthrough depth frame.
property
passthroughFeatures
Passthrough features.
property
passthroughRect
Passthrough rectified frame.
property
property
transform
Output transform.
class

depthai.node.RecordMetadataOnly(depthai.node.ThreadedHostNode)

class

depthai.node.ReplayMetadataOnly(depthai.node.ThreadedHostNode)

class

depthai.node.SPIIn(depthai.DeviceNode)

method
method
getMaxDataSize(self) -> int: int
Get maximum messages size in bytes
method
getNumFrames(self) -> int: int
Get number of frames in pool
method
method
setBusId(self, id: int)
Specifies SPI Bus number to use  Parameter ``id``:     SPI Bus id
method
setMaxDataSize(self, maxDataSize: int)
Set maximum message size it can receive  Parameter ``maxDataSize``:     Maximum size in bytes
method
setNumFrames(self, numFrames: int)
Set number of frames in pool for sending messages forward  Parameter ``numFrames``:     Maximum number of frames in pool
method
setStreamName(self, name: str)
Specifies stream name over which the node will receive data  Parameter ``name``:     Stream name
property
out
Outputs message of the same type as sent from the host.
class

depthai.node.SPIOut(depthai.DeviceNode)

method
setBusId(self, id: int)
Specifies SPI Bus number to use  Parameter ``id``:     SPI Bus id
method
setStreamName(self, name: str)
Specifies stream name over which the node will send data  Parameter ``name``:     Stream name
property
input
Input for any type of messages to be transferred over SPI stream Default queue is blocking with size 8
class

depthai.node.Script(depthai.DeviceNode)

method
getProcessor(self) -> depthai.ProcessorType: depthai.ProcessorType
Get on which processor the script should run  Returns:     Processor type - Leon CSS or Leon MSS
method
getScriptName(self) -> str: str
Get the script name in utf-8.  When a name was set with setScript() or setScriptPath(), returns that name. When a script was loaded with setScriptPath() without a name provided, returns the utf-8 string of that path. Otherwise returns "<script>"  Returns:     std::string of script name in utf-8
method
setProcessor(self, arg0: depthai.ProcessorType)
Set on which processor the script should run  Parameter ``type``:     Processor type - Leon CSS or Leon MSS
method
method
property
property
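A minimal sketch; setScript() is referenced by getScriptName() above, and the node.io names inside the script body are assumptions that must match the links made on the node:

    import depthai as dai

    pipeline = dai.Pipeline()
    script = pipeline.create(dai.node.Script)
    script.setProcessor(dai.ProcessorType.LEON_CSS)
    script.setScript("""
    while True:
        frame = node.io['in'].get()   # assumption: an input named 'in' was linked
        node.io['out'].send(frame)    # assumption: an output named 'out' was linked
    """)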
class

depthai.node.SpatialDetectionNetwork(depthai.DeviceNode)

method
method
method
getConfidenceThreshold(self) -> float: float
Retrieves threshold at which to filter the rest of the detections.  Returns:     Detection confidence
method
getNumInferenceThreads(self) -> int: int
How many inference threads will be used to run the network  Returns:     Number of threads, 0, 1 or 2. Zero means AUTO
method
setBackend(self, setBackend: str)
Specifies backend to use  Parameter ``backend``:     String specifying backend to use
method
setBackendProperties(self, setBackendProperties: dict [ str , str ])
Set backend properties  Parameter ``backendProperties``:     backend properties map
method
method
setBlobPath(self, path: Path)
Load network blob into assets and use once pipeline is started.  Throws:     Error if file doesn't exist or isn't a valid network blob.  Parameter ``path``:     Path to network blob
method
setBoundingBoxScaleFactor(self, scaleFactor: float)
Specifies scale factor for detected bounding boxes.  Parameter ``scaleFactor``:     Scale factor must be in the interval (0,1].
method
setConfidenceThreshold(self, thresh: float)
Specifies confidence threshold at which to filter the rest of the detections.  Parameter ``thresh``:     Detection confidence must be greater than specified threshold to be added to     the list
method
setDepthLowerThreshold(self, lowerThreshold: int)
Specifies lower threshold in depth units (millimeter by default) for depth values which will be used to calculate spatial data  Parameter ``lowerThreshold``:     LowerThreshold must be in the interval [0,upperThreshold] and less than     upperThreshold.
method
setDepthUpperThreshold(self, upperThreshold: int)
Specifies upper threshold in depth units (millimeter by default) for depth values which will used to calculate spatial data  Parameter ``upperThreshold``:     UpperThreshold must be in the interval (lowerThreshold,65535].
method
setFromModelZoo(self, description: depthai.NNModelDescription, useCached: bool)
Download model from zoo and set it for this Node  Parameter ``description``:     Model description to download  Parameter ``useCached``:     Use cached model if available
method
setModelPath(self, modelPath: Path)
Load network file into assets.  Parameter ``modelPath``:     Path to the model file.
method
method
setNumInferenceThreads(self, numThreads: int)
How many threads should the node use to run the network.  Parameter ``numThreads``:     Number of threads to dedicate to this node
method
setNumNCEPerInferenceThread(self, numNCEPerThread: int)
How many Neural Compute Engines should a single thread use for inference  Parameter ``numNCEPerThread``:     Number of NCE per thread
method
setNumPoolFrames(self, numFrames: int)
Specifies how many frames will be available in the pool  Parameter ``numFrames``:     How many frames will pool have
method
setNumShavesPerInferenceThread(self, numShavesPerInferenceThread: int)
How many Shaves should a single thread use for inference  Parameter ``numShavesPerThread``:     Number of shaves per thread
method
setSpatialCalculationAlgorithm(self, calculationAlgorithm: depthai.SpatialLocationCalculatorAlgorithm)
Specifies spatial location calculator algorithm: Average/Min/Max  Parameter ``calculationAlgorithm``:     Calculation algorithm.
property
boundingBoxMapping
Outputs mapping of detected bounding boxes relative to the depth map. Suitable for displaying remapped bounding boxes on the depth frame.
property
input
Input message with data to be inferred upon
property
inputDepth
Input message with depth data used to retrieve spatial information about detected objects. Default queue is non-blocking with size 4.
property
out
Outputs ImgDetections message that carries parsed detection results.
property
outNetwork
Outputs unparsed inference results.
property
passthrough
Passthrough message on which the inference was performed.  Suitable for when input queue is set to non-blocking behavior.
property
passthroughDepth
Passthrough message for depth frame on which the spatial location calculation was performed. Suitable for when input queue is set to non-blocking behavior.
property
spatialLocationCalculatorOutput
Output of SpatialLocationCalculator node, which is used internally by SpatialDetectionNetwork. Suitable when extra information is required from SpatialLocationCalculator node, e.g. minimum, maximum distance.
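A minimal sketch of the thresholds above; the blob path and the upstream links are assumptions:

    import depthai as dai

    pipeline = dai.Pipeline()
    sdn = pipeline.create(dai.node.SpatialDetectionNetwork)
    sdn.setBlobPath("detector.blob")    # assumption: path to an existing detection blob
    sdn.setConfidenceThreshold(0.5)
    sdn.setBoundingBoxScaleFactor(0.5)  # average depth over the central half of each box
    sdn.setDepthLowerThreshold(100)     # ignore depth closer than 10 cm
    sdn.setDepthUpperThreshold(10000)   # ignore depth farther than 10 m
    # assumption: 'camera' and 'stereo' are nodes created earlier
    # camera.preview.link(sdn.input)
    # stereo.depth.link(sdn.inputDepth)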
class

depthai.node.SpatialLocationCalculator(depthai.DeviceNode)

property
initialConfig
Initial config to use when calculating spatial location data.
property
inputConfig
Input SpatialLocationCalculatorConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
inputDepth
Input message with depth data used to retrieve spatial information about detected objects. Default queue is non-blocking with size 4.
property
out
Outputs SpatialLocationCalculatorData message that carries spatial location results.
property
passthroughDepth
Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
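A minimal sketch, assuming the usual SpatialLocationCalculatorConfigData/Rect message API for defining a region of interest:

    import depthai as dai

    pipeline = dai.Pipeline()
    slc = pipeline.create(dai.node.SpatialLocationCalculator)
    roi = dai.SpatialLocationCalculatorConfigData()
    roi.roi = dai.Rect(dai.Point2f(0.4, 0.4), dai.Point2f(0.6, 0.6))  # central region
    slc.initialConfig.addROI(roi)
    # assumption: 'stereo' is a StereoDepth node created earlier
    # stereo.depth.link(slc.inputDepth)
    # slc.out emits SpatialLocationCalculatorData with per-ROI spatial coordinates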
class

depthai.node.StereoDepth(depthai.DeviceNode)

class
PresetMode
Preset modes for stereo depth.  Members:    FAST_ACCURACY    FAST_DENSITY    DEFAULT    FACE    HIGH_DETAIL    ROBOTICS
method
method
method
enableDistortionCorrection(self, arg0: bool)
Equivalent to useHomographyRectification(!enableDistortionCorrection)
method
loadMeshData()
Specify mesh calibration data for 'left' and 'right' inputs, as vectors of bytes. Overrides useHomographyRectification behavior. See `loadMeshFiles` for the expected data format
method
loadMeshFiles(self, pathLeft: Path, pathRight: Path)
Specify local filesystem paths to the mesh calibration files for 'left' and 'right' inputs.  When a mesh calibration is set, it overrides the camera intrinsics/extrinsics matrices. Overrides useHomographyRectification behavior. Mesh format: a sequence of (y,x) points as 'float' with coordinates from the input image to be mapped in the output. The mesh can be subsampled, configured by `setMeshStep`.  With a 1280x800 resolution and the default (16,16) step, the required mesh size is:  width: 1280 / 16 + 1 = 81  height: 800 / 16 + 1 = 51
method
setAlphaScaling(self, arg0: float)
Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). On some high distortion lenses, and/or due to rectification (image rotated), invalid areas may appear even with alpha=0; in these cases alpha < 0.0 helps remove invalid areas. See getOptimalNewCameraMatrix from OpenCV for more details.
method
setBaseline(self, arg0: float)
Override baseline from calibration. Used only in disparity to depth conversion. Units are centimeters.
method
setDefaultProfilePreset(self, arg0: StereoDepth.PresetMode)
Sets a default preset based on specified option.  Parameter ``mode``:     Stereo depth preset mode
method
method
setDepthAlignmentUseSpecTranslation(self, arg0: bool)
Use baseline information for depth alignment from specs (design data) or from calibration. Default: true
method
setDisparityToDepthUseSpecTranslation(self, arg0: bool)
Use baseline information for disparity to depth conversion from specs (design data) or from calibration. Default: true
method
setExtendedDisparity(self, enable: bool)
Disparity range increased from 0-95 to 0-190, combined from full resolution and downscaled images.  Suitable for short range objects. Currently incompatible with sub-pixel disparity
method
setFocalLength(self, arg0: float)
Override focal length from calibration. Used only in disparity to depth conversion. Units are pixels.
method
method
setLeftRightCheck(self, enable: bool)
Computes disparities in both L-R and R-L directions and combines them, for better occlusion handling, discarding invalid disparity values
method
setMeshStep(self, width: int, height: int)
Set the distance between mesh points. Default: (16, 16)
method
setNumFramesPool(self, arg0: int)
Specify number of frames in pool.  Parameter ``numFramesPool``:     How many frames should the pool have
method
setOutputKeepAspectRatio(self, keep: bool)
Specifies whether the frames resized by `setOutputSize` should preserve aspect ratio, with potential cropping when enabled. Default `true`
method
setOutputSize(self, width: int, height: int)
Specify disparity/depth output resolution size, implemented by scaling.  Currently only applicable when aligning to RGB camera
method
setPostProcessingHardwareResources(self, arg0: int, arg1: int)
Specify allocated hardware resources for stereo depth. Suitable only to increase post processing runtime.  Parameter ``numShaves``:     Number of shaves.  Parameter ``numMemorySlices``:     Number of memory slices.
method
setRectification(self, enable: bool)
Rectify input images or not.
method
setRectificationUseSpecTranslation(self, arg0: bool)
Obtain rectification matrices using spec translation (design data) or from calibration in calculations. Should be used only for debugging. Default: false
method
setRectifyEdgeFillColor(self, color: int)
Fill color for missing data at frame edges  Parameter ``color``:     Grayscale 0..255, or -1 to replicate pixels
method
setRuntimeModeSwitch(self, arg0: bool)
Enable runtime stereo mode switch, e.g. from standard to LR-check. Note: when enabled, resources are allocated for the worst case, to enable switching to any mode.
method
setSubpixel(self, enable: bool)
Computes disparity with sub-pixel interpolation (3 fractional bits by default).  Suitable for long range. Currently incompatible with extended disparity
method
setSubpixelFractionalBits(self, subpixelFractionalBits: int)
Number of fractional bits for subpixel mode. Default value: 3. Valid values: 3,4,5. Defines the number of fractional disparities: 2^x. Median filter postprocessing is supported only for 3 fractional bits.
method
useHomographyRectification(self, arg0: bool)
Use 3x3 homography matrix for stereo rectification instead of sparse mesh generated on device. Default behaviour is AUTO: for lenses with FOV over 85 degrees a sparse mesh is used, otherwise a 3x3 homography. If custom mesh data is provided through loadMeshData or loadMeshFiles this option is ignored.  Parameter ``useHomographyRectification``:     true: 3x3 homography matrix generated from calibration data is used for     stereo rectification, can't correct lens distortion. false: sparse mesh is     generated on-device from calibration data with mesh step specified with     setMeshStep (Default: (16, 16)), can correct lens distortion. Implementation     for generating the mesh is the same as OpenCV's initUndistortRectifyMap     function. Only the first 8 distortion coefficients are used from calibration     data.
property
confidenceMap
Outputs ImgFrame message that carries RAW8 confidence map. Lower values mean lower confidence of the calculated disparity value. RGB alignment, left-right check or any postprocessing (e.g., median filter) is not performed on confidence map.
property
debugDispCostDump
Outputs ImgFrame message that carries cost dump of disparity map. Useful for debugging/fine tuning.
property
debugDispLrCheckIt1
Outputs ImgFrame message that carries left-right check first iteration (before combining with second iteration) disparity map. Useful for debugging/fine tuning.
property
debugDispLrCheckIt2
Outputs ImgFrame message that carries left-right check second iteration (before combining with first iteration) disparity map. Useful for debugging/fine tuning.
property
debugExtDispLrCheckIt1
Outputs ImgFrame message that carries extended left-right check first iteration (downscaled frame, before combining with second iteration) disparity map. Useful for debugging/fine tuning.
property
debugExtDispLrCheckIt2
Outputs ImgFrame message that carries extended left-right check second iteration (downscaled frame, before combining with first iteration) disparity map. Useful for debugging/fine tuning.
property
depth
Outputs ImgFrame message that carries RAW16 encoded (0..65535) depth data in depth units (millimeter by default).  Non-determined / invalid depth values are set to 0
property
disparity
Outputs ImgFrame message that carries RAW8 / RAW16 encoded disparity data: RAW8 encoded (0..95) for standard mode; RAW8 encoded (0..190) for extended disparity mode; RAW16 encoded for subpixel disparity mode: - 0..760 for 3 fractional bits (by default) - 0..1520 for 4 fractional bits - 0..3040 for 5 fractional bits
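As a worked illustration (not part of the API) of how these subpixel ranges relate to depth: the RAW16 value must be divided by 2^fractionalBits before the usual depth = focalLength * baseline / disparity conversion:

    # illustrative numbers; take focal length (pixels) and baseline (cm) from
    # calibration, or from the setFocalLength()/setBaseline() overrides
    focal_px = 800.0
    baseline_cm = 7.5
    raw_disp = 760                                # max RAW16 value at 3 fractional bits
    disp_px = raw_disp / (1 << 3)                 # 760 / 8 = 95 pixel disparity
    depth_cm = focal_px * baseline_cm / disp_px   # 800 * 7.5 / 95 ~= 63.2 cm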
property
initialConfig
Initial config to use for StereoDepth.
property
inputAlignTo
Input align to message. Default queue is non-blocking with size 1.
property
inputConfig
Input StereoDepthConfig message with ability to modify parameters in runtime.
property
left
Input for left ImgFrame of left-right pair
property
outConfig
Outputs StereoDepthConfig message that contains current stereo configuration.
property
rectifiedLeft
Outputs ImgFrame message that carries RAW8 encoded (grayscale) rectified frame data.
property
rectifiedRight
Outputs ImgFrame message that carries RAW8 encoded (grayscale) rectified frame data.
property
right
Input for right ImgFrame of left-right pair
property
syncedLeft
Passthrough ImgFrame message from 'left' Input.
property
syncedRight
Passthrough ImgFrame message from 'right' Input.
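A minimal sketch combining the options above, assuming the usual MonoCamera outputs:

    import depthai as dai

    pipeline = dai.Pipeline()
    monoL = pipeline.create(dai.node.MonoCamera)
    monoR = pipeline.create(dai.node.MonoCamera)
    monoL.setBoardSocket(dai.CameraBoardSocket.CAM_B)
    monoR.setBoardSocket(dai.CameraBoardSocket.CAM_C)
    stereo = pipeline.create(dai.node.StereoDepth)
    stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.DEFAULT)
    stereo.setLeftRightCheck(True)  # better occlusion handling
    stereo.setSubpixel(True)        # better long-range accuracy
    monoL.out.link(stereo.left)
    monoR.out.link(stereo.right)
    # stereo.depth emits RAW16 depth; stereo.disparity the raw disparity map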
class

depthai.node.Sync(depthai.DeviceNode)

method
getSyncAttempts(self) -> int: int
Gets the number of sync attempts
method
getSyncThreshold(self) -> datetime.timedelta: datetime.timedelta
Gets the maximal interval between messages in the group in milliseconds
method
runOnHost(self) -> bool: bool
Check if the node is set to run on host
method
setRunOnHost(self, runOnHost: bool)
Specify whether to run on host or device By default, the node will run on device.
method
setSyncAttempts(self, syncAttempts: int)
Set the number of attempts to get the specified max interval between messages in the group  Parameter ``syncAttempts``:     Number of attempts to get the specified max interval between messages in the     group: - if syncAttempts = 0 the node sends a message as soon as the group     is filled - if syncAttempts > 0 the node will make syncAttempts attempts to     synchronize before sending out a message - if syncAttempts = -1 (default)     the node will only send a message if successfully synchronized
method
setSyncThreshold(self, syncThreshold: datetime.timedelta)
Set the maximal interval between messages in the group  Parameter ``syncThreshold``:     Maximal interval between messages in the group
property
inputs
A map of inputs
property
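A minimal sketch; the map keys are free-form and the upstream links are assumptions:

    import datetime
    import depthai as dai

    pipeline = dai.Pipeline()
    sync = pipeline.create(dai.node.Sync)
    sync.setSyncThreshold(datetime.timedelta(milliseconds=50))
    sync.setSyncAttempts(-1)  # only emit a group once successfully synchronized
    # assumption: 'color' and 'stereo' are nodes created earlier
    # color.video.link(sync.inputs["rgb"])
    # stereo.depth.link(sync.inputs["depth"])
    # the node's output then emits a MessageGroup with one message per key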
class

depthai.node.SystemLogger(depthai.DeviceNode)

method
getRate(self) -> float: float
Gets logging rate, at which messages will be sent out
method
setRate(self, hz: float)
Specify logging rate, at which messages will be sent out  Parameter ``hz``:     Sending rate in hertz (messages per second)
property
out
Outputs SystemInformation[S3] message that carries various system information like memory and CPU usage, temperatures, ... For series 2 devices outputs SystemInformation message, for series 3 devices outputs SystemInformationS3 message
class

depthai.node.Thermal(depthai.DeviceNode)

method
method
getBoardSocket(self) -> depthai.CameraBoardSocket: depthai.CameraBoardSocket
Retrieves which board socket to use  Returns:     Board socket to use
property
color
Outputs YUV422i grayscale thermal image.
property
initialConfig
Initial config to use for thermal sensor.
property
inputConfig
Input ThermalConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
temperature
Outputs FP16 (degC) thermal image.
class

depthai.node.ToF(depthai.DeviceNode)

method
method
getBoardSocket(self) -> depthai.CameraBoardSocket: depthai.CameraBoardSocket
Retrieves which board socket to use  Returns:     Board socket to use
property
property
property
initialConfig
Initial config to use for the ToF sensor.
property
inputConfig
Input ToFConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
property
property
class

depthai.node.UVC(depthai.DeviceNode)

method
setGpiosOnInit(self, list: dict [ int , int ])
Set GPIO list <gpio_number, value> for GPIOs to set (on/off) at init
method
setGpiosOnStreamOff(self, list: dict [ int , int ])
Set GPIO list <gpio_number, value> for GPIOs to set when streaming is disabled
method
setGpiosOnStreamOn(self, list: dict [ int , int ])
Set GPIO list <gpio_number, value> for GPIOs to set when streaming is enabled
property
input
Input for image frames to be streamed over UVC Default queue is blocking with size 8
class

depthai.node.VideoEncoder(depthai.DeviceNode)

method
method
method
method
method
method
method
getLossless(self) -> bool: bool
Get lossless mode. Applies only when using [M]JPEG profile.
method
method
getNumBFrames(self) -> int: int
Get number of B frames
method
getNumFramesPool(self) -> int: int
Get number of frames in pool  Returns:     Number of pool frames
method
method
method
method
setBitrate(self, bitrate: int)
Set output bitrate in bps, for CBR rate control mode. 0 for auto (based on frame size and FPS)
method
setBitrateKbps(self, bitrateKbps: int)
Set output bitrate in kbps, for CBR rate control mode. 0 for auto (based on frame size and FPS)
method
setDefaultProfilePreset(self, fps: float, profile: depthai.VideoEncoderProperties.Profile)
Sets a default preset based on specified frame rate and profile  Parameter ``fps``:     Frame rate in frames per second  Parameter ``profile``:     Encoding profile
method
setFrameRate(self, frameRate: float)
Sets expected frame rate  Parameter ``frameRate``:     Frame rate in frames per second
method
setKeyframeFrequency(self, freq: int)
Set keyframe frequency. Every Nth frame a keyframe is inserted.  Applicable only to H264 and H265 profiles  Examples:  - 30 FPS video, keyframe frequency: 30. Every 1s a keyframe will be inserted  - 60 FPS video, keyframe frequency: 180. Every 3s a keyframe will be inserted
method
setLossless(self, arg0: bool)
Set lossless mode. Applies only to [M]JPEG profile  Parameter ``lossless``:     True to enable lossless jpeg encoding, false otherwise
method
setMaxOutputFrameSize(self, maxFrameSize: int)
Specifies maximum output encoded frame size
method
setNumBFrames(self, numBFrames: int)
Set number of B frames to be inserted
method
setNumFramesPool(self, frames: int)
Set number of frames in pool  Parameter ``frames``:     Number of pool frames
method
method
setQuality(self, quality: int)
Set quality  Parameter ``quality``:     Value between 0-100%. Approximates quality
method
property
bitstream
Outputs ImgFrame message that carries BITSTREAM encoded (MJPEG, H264 or H265) frame data. Mutually exclusive with out.
property
input
Input for NV12 ImgFrame to be encoded
property
out
Outputs EncodedFrame message that carries encoded (MJPEG, H264 or H265) frame data. Mutually exclusive with bitstream.
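A minimal sketch of the preset and rate-control calls above; the Profile member name follows the usual DepthAI naming and the upstream link is an assumption:

    import depthai as dai

    pipeline = dai.Pipeline()
    enc = pipeline.create(dai.node.VideoEncoder)
    enc.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.H265_MAIN)
    enc.setBitrateKbps(4000)      # CBR target; 0 would mean auto
    enc.setKeyframeFrequency(30)  # one keyframe per second at 30 FPS
    # assumption: 'camera' is a node created earlier producing NV12 frames
    # camera.video.link(enc.input)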
class

depthai.node.Warp(depthai.DeviceNode)

method
getHwIds(self) -> list[int]: list[int]
Retrieve which hardware warp engines to use
method
method
setHwIds(self, arg0: list [ int ])
Specify which hardware warp engines to use  Parameter ``ids``:     Which warp engines to use (0, 1, 2)
method
setInterpolation(self, arg0: depthai.Interpolation)
Specify which interpolation method to use  Parameter ``interpolation``:     type of interpolation
method
setMaxOutputFrameSize(self, arg0: int)
Specify maximum size of output image.  Parameter ``maxFrameSize``:     Maximum frame size in bytes
method
setNumFramesPool(self, arg0: int)
Specify number of frames in pool.  Parameter ``numFramesPool``:     How many frames should the pool have
method
method
property
inputImage
Input image to be modified Default queue is blocking with size 8
property
out
Outputs ImgFrame message that carries warped image.
class

depthai.ADatatype

class

depthai.AprilTag

method
property
bottomLeft
The detected bottom left coordinates.
method
property
bottomRight
The detected bottom right coordinates.
method
property
decisionMargin
A measure of the quality of the binary decoding process; the average difference between the intensity of a data bit versus the decision threshold. Higher numbers roughly indicate better decodes. This is a reasonable measure of detection accuracy only for very small tags; it is not effective for larger tags (where we could have sampled anywhere within a bit cell and still gotten a good detection).
method
property
hamming
How many error bits were corrected? Note: accepting large numbers of corrected errors leads to greatly increased false positive rates. As of this implementation, the detector cannot detect tags with a hamming distance greater than 2.
method
property
id
The decoded ID of the tag
method
property
topLeft
The detected top left coordinates.
method
property
topRight
The detected top right coordinates.
method
class

depthai.AprilTagConfig(depthai.Buffer)

class
Family
Supported AprilTag families.  Members:    TAG_36H11    TAG_36H10    TAG_25H9    TAG_16H5    TAG_CIR21H7    TAG_STAND41H12
class
QuadThresholds
AprilTag quad threshold parameters.
method
method
method
property
decodeSharpening
How much sharpening should be done to decoded images? This can help decode small tags but may or may not help in odd lighting conditions or low light conditions. The default value is 0.25.
method
property
family
AprilTag family.
method
property
maxHammingDistance
Max number of error bits that should be corrected. Accepting large numbers of corrected errors leads to greatly increased false positive rates. As of this implementation, the detector cannot detect tags with a hamming distance greater than 2.
method
property
quadDecimate
Detection of quads can be done on a lower-resolution image, improving speed at a cost of pose accuracy and a slight decrease in detection rate. Decoding the binary payload is still done at full resolution.
method
property
quadSigma
What Gaussian blur should be applied to the segmented image. Parameter is the standard deviation in pixels. Very noisy images benefit from non-zero values (e.g. 0.8).
method
property
quadThresholds
AprilTag quad threshold parameters.
method
property
refineEdges
When non-zero, the edges of each quad are adjusted to "snap to" strong gradients nearby. This is useful when decimation is employed, as it can increase the quality of the initial quad estimate substantially. Generally recommended to be on. Very computationally inexpensive. Option is ignored if quadDecimate = 1.
method
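A minimal sketch setting the fields above on a config message:

    import depthai as dai

    cfg = dai.AprilTagConfig()
    cfg.family = dai.AprilTagConfig.Family.TAG_36H11
    cfg.quadDecimate = 2          # detect quads at reduced resolution, decode at full
    cfg.quadSigma = 0.8           # Gaussian blur helps on noisy images
    cfg.decodeSharpening = 0.25   # default sharpening of decoded images
    cfg.maxHammingDistance = 1    # keep false-positive rates low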
class

depthai.AprilTagConfig.QuadThresholds

method
property
criticalDegree
Reject quads where pairs of edges have angles that are close to straight or close to 180 degrees. Zero means that no quads are rejected. (In degrees).
method
property
deglitch
Should the thresholded image be deglitched? Only useful for very noisy images
method
property
maxLineFitMse
When fitting lines to the contours, what is the maximum mean squared error allowed? This is useful in rejecting contours that are far from being quad shaped; rejecting these quads "early" saves expensive decoding processing.
method
property
maxNmaxima
How many corner candidates to consider when segmenting a group of pixels into a quad.
method
property
minClusterPixels
Reject quads containing too few pixels.
method
property
minWhiteBlackDiff
When we build our model of black & white pixels, we add an extra check that the white model must be (overall) brighter than the black model. How much brighter? (in pixel values: [0,255]).
method
class

depthai.AprilTagProperties

variable
property
inputConfigSync
Whether to wait for config at 'inputConfig' IO
method
property
numThreads
How many threads to use for AprilTag detection
method
class

depthai.AprilTags(depthai.Buffer)

variable
method
method
method
getSequenceNum(self) -> int: int
Retrieves image sequence number
method
getTimestamp(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp related to dai::Clock::now()
method
getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
class

depthai.Asset

class

depthai.AssetManager

method
method
addExisting(self, assets: list [ Asset ])
Adds all assets in an array to the AssetManager  Parameter ``assets``:     Vector of assets to add
method
method
method
getRootPath(self) -> str: str
Get root path of the asset manager  Returns:     Root path
method
remove(self, key: str)
Removes asset with key  Parameter ``key``:     Key of asset to remove
method
method
size(self) -> int: int
Returns:     Number of assets stored in the AssetManager
class

depthai.BenchmarkReport(depthai.Buffer)

class

depthai.BoardConfig

class
GPIO
GPIO config
class
class
class
class
Network
Network configuration
class
UART
UART instance config
class
class
USB
USB related config
class
UVC
UVC configuration for USB descriptor
class
variable
variable
variable
variable
variable
method
property
emmc
eMMC config
method
property
logDevicePrints
log device prints
method
property
logPath
log path
method
property
logSizeMax
Max log size
method
property
logVerbosity
log verbosity
method
property
mipi4LaneRgb
MIPI 4Lane RGB config
method
property
method
property
sysctl
Optional list of FreeBSD sysctl parameters to be set (system, network, etc.). For example: "net.inet.tcp.delayed_ack=0" (this one is also set by default)
method
property
uart
UART instance map
method
property
usb3PhyInternalClock
USB3 phy config
method
property
watchdogTimeoutMs
Watchdog config
method
class

depthai.BoardConfig.GPIO

class
Direction
Members:    INPUT    OUTPUT
class
Drive
Drive strength in mA (2, 4, 8 and 12 mA)  Members:    MA_2    MA_4    MA_8    MA_12
class
Level
Members:    LOW    HIGH
class
Mode
Members:    ALT_MODE_0    ALT_MODE_1    ALT_MODE_2    ALT_MODE_3    ALT_MODE_4    ALT_MODE_5    ALT_MODE_6    DIRECT
class
Pull
Members:    NO_PULL    PULL_UP    PULL_DOWN    BUS_KEEPER
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
method
class

depthai.BoardConfig.Network

class

depthai.BoardConfig.UART

class

depthai.BoardConfig.USB

class

depthai.BoardConfig.UVC

class

depthai.Buffer(depthai.ADatatype)

method
method
method
getData(self) -> numpy.ndarray[numpy.uint8]: numpy.ndarray[numpy.uint8]
Get non-owning reference to internal buffer  Returns:     Reference to internal buffer
method
method
method
getTimestampDevice(self: typing_extensions.Buffer) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
method
getVisualizationMessage(self: typing_extensions.Buffer) -> ImgAnnotations|ImgFrame|None: ImgAnnotations|ImgFrame|None
Get visualizable message  Returns:     Visualizable message, either ImgFrame, ImgAnnotations or std::monostate     (None)
method
method
method
setTimestamp(self: typing_extensions.Buffer, arg0: datetime.timedelta)
Sets image timestamp related to dai::Clock::now()
method
class

depthai.CalibrationHandler

static method
CalibrationHandler.fromJson(arg0: json) -> CalibrationHandler: CalibrationHandler
Construct a new Calibration Handler object from JSON EepromData.  Parameter ``eepromDataJson``:     EepromData as JSON
method
method
eepromToJson(self) -> json: json
Get JSON representation of calibration data  Returns:     JSON structure
method
eepromToJsonFile(self, destPath: Path) -> bool: bool
Write raw calibration/board data to json file.  Parameter ``destPath``:     Full path to the json file in which raw calibration data will be stored  Returns:     True on success, false otherwise
method
getBaselineDistance(self, cam1: CameraBoardSocket = ..., cam2: CameraBoardSocket = ..., useSpecTranslation: bool = True) -> float: float
Get the baseline distance between two specified cameras. By default it will get the baseline between CameraBoardSocket.RIGHT and CameraBoardSocket.LEFT.  Parameter ``cam1``:     First camera  Parameter ``cam2``:     Second camera  Parameter ``useSpecTranslation``:     Enabling this bool uses the translation information from the board design     data (not the calibration data)  Returns:     baseline distance in centimeters
method
getCameraExtrinsics(self, srcCamera: CameraBoardSocket, dstCamera: CameraBoardSocket, useSpecTranslation: bool = False) -> list[list[float]]: list[list[float]]
Get the Camera Extrinsics object between two cameras from the calibration data. If there is a linked connection between any two cameras then the relative rotation and translation (in centimeters) is returned by this function.  Parameter ``srcCamera``:     Camera Id of the camera which will be considered as origin.  Parameter ``dstCamera``:     Camera Id of the destination camera to which we are fetching the rotation     and translation from the srcCamera  Parameter ``useSpecTranslation``:     Enabling this bool uses the translation information from the board design     data  Returns:     a 4x4 transformation matrix in the homogeneous coordinate system:
    [ r00 r01 r02 Tx ]
    [ r10 r11 r12 Ty ]
    [ r20 r21 r22 Tz ]
    [  0   0   0   1 ]
method
method
getCameraRotationMatrix(self, srcCamera: CameraBoardSocket, dstCamera: CameraBoardSocket) -> list[list[float]]: list[list[float]]
Get the Camera rotation matrix between two cameras from the calibration data.  Parameter ``srcCamera``:     Camera Id of the camera which will be considered as origin.  Parameter ``dstCamera``:     Camera Id of the destination camera to which we are fetching the rotation     vector from the srcCamera  Returns:     a 3x3 rotation matrix:
    [ r00 r01 r02 ]
    [ r10 r11 r12 ]
    [ r20 r21 r22 ]
method
getCameraToImuExtrinsics(self, cameraId: CameraBoardSocket, useSpecTranslation: bool = False) -> list[list[float]]: list[list[float]]
Get the Camera To Imu Extrinsics object. From the loaded data, if there is a linked connection between the IMU and the given camera, their relative rotation and translation from the camera to the IMU is returned.  Parameter ``cameraId``:     Camera Id of the camera which will be considered as origin, from which the     transformation matrix to the IMU will be found  Parameter ``useSpecTranslation``:     Enabling this bool uses the translation information from the board design     data  Returns:     a 4x4 transformation matrix in the homogeneous coordinate system:
    [ r00 r01 r02 Tx ]
    [ r10 r11 r12 Ty ]
    [ r20 r21 r22 Tz ]
    [  0   0   0   1 ]
method
getCameraTranslationVector(self, srcCamera: CameraBoardSocket, dstCamera: CameraBoardSocket, useSpecTranslation: bool = True) -> list[float]: list[float]
Get the Camera translation vector between two cameras from the calibration data.  Parameter ``srcCamera``:     Camera Id of the camera which will be considered as origin.  Parameter ``dstCamera``:     Camera Id of the destination camera to which we are fetching the translation     vector from the SrcCamera  Parameter ``useSpecTranslation``:     Disabling this bool uses the translation information from the calibration     data (not the board design data)  Returns:     a translation vector like [x, y, z] in centimeters
method
getDefaultIntrinsics(self, cameraId: CameraBoardSocket) -> tuple[list[list[float]], int, int]: tuple[list[list[float]], int, int]
Get the Default Intrinsics object  Parameter ``cameraId``:     Uses the cameraId to identify which camera intrinsics to return  Returns:     The 3x3 intrinsics matrix of the respective camera, along with the width and     height at which it was calibrated:
    [ fx  0 cx ]
    [  0 fy cy ]
    [  0  0  1 ]
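A minimal sketch of reading intrinsics, assuming readCalibration() on a connected Device is the usual way to obtain a CalibrationHandler:

    import depthai as dai

    with dai.Device() as device:
        calib = device.readCalibration()
        M, width, height = calib.getDefaultIntrinsics(dai.CameraBoardSocket.CAM_A)
        print(f"3x3 intrinsics, calibrated at {width}x{height}:", M)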
method
getDistortionCoefficients(self, cameraId: CameraBoardSocket) -> list[float]: list[float]
Get the Distortion Coefficients object  Parameter ``cameraId``:     Uses the cameraId to identify which distortion Coefficients to return.  Returns:     the distortion coefficients of the requested camera in this order:     [k1,k2,p1,p2,k3,k4,k5,k6,s1,s2,s3,s4,τx,τy] for CameraModel::Perspective or     [k1, k2, k3, k4] for CameraModel::Fisheye see     https://docs.opencv.org/4.5.4/d9/d0c/group__calib3d.html for Perspective     model (Rational Polynomial Model) see     https://docs.opencv.org/4.5.4/db/d58/group__calib3d__fisheye.html for     Fisheye model
method
getDistortionModel(self, cameraId: CameraBoardSocket) -> CameraModel: CameraModel
Get the distortion model of the given camera  Parameter ``cameraId``:     of the camera for which the distortion model is requested.  Returns:     distortion model of the camera with given cameraId.
method
getEepromData(self) -> EepromData: EepromData
Get the Eeprom Data object  Returns:     EepromData object which contains the raw calibration data
method
getFov(self, cameraId: CameraBoardSocket, useSpec: bool = True) -> float: float
Get the Fov of the camera  Parameter ``cameraId``:     of the camera of which we are fetching fov.  Parameter ``useSpec``:     Disabling this bool will calculate the fov based on intrinsics (focal     length, image width), instead of getting it from the camera specs  Returns:     field of view of the camera with given cameraId.
method
getImuToCameraExtrinsics(self, cameraId: CameraBoardSocket, useSpecTranslation: bool = False) -> list[list[float]]: list[list[float]]
Get the Imu To Camera Extrinsics object. From the loaded data, if there is a linked connection between the IMU and the given camera, their relative rotation and translation from the IMU to the camera is returned.  Parameter ``cameraId``:     Camera Id of the camera which will be considered as destination, to which     the transformation matrix from the IMU will be found.  Parameter ``useSpecTranslation``:     Enabling this bool uses the translation information from the board design     data  Returns:     a 4x4 transformation matrix in the homogeneous coordinate system:
    [ r00 r01 r02 Tx ]
    [ r10 r11 r12 Ty ]
    [ r20 r21 r22 Tz ]
    [  0   0   0   1 ]
method
getLensPosition(self, cameraId: CameraBoardSocket) -> int: int
Get the lens position of the given camera  Parameter ``cameraId``:     of the camera with lens position is requested.  Returns:     lens position of the camera with given cameraId at which it was calibrated.
method
getStereoLeftCameraId(self) -> CameraBoardSocket: CameraBoardSocket
Get the camera id of the camera which is used as left camera of the stereo setup  Returns:     cameraID of the camera used as left camera
method
getStereoLeftRectificationRotation(self) -> list[list[float]]: list[list[float]]
Get the Stereo Left Rectification Rotation object  Returns:     returns a 3x3 rectification rotation matrix
method
getStereoRightCameraId(self) -> CameraBoardSocket: CameraBoardSocket
Get the camera id of the camera which is used as right camera of the stereo setup  Returns:     cameraID of the camera used as right camera
method
getStereoRightRectificationRotation(self) -> list[list[float]]: list[list[float]]
Get the Stereo Right Rectification Rotation object  Returns:     returns a 3x3 rectification rotation matrix
method
method
setCameraExtrinsics(self, srcCameraId: CameraBoardSocket, destCameraId: CameraBoardSocket, rotationMatrix: list [ list [ float ] ], translation: list [ float ], specTranslation: list [ float ] = [0.0, 0.0, 0.0])
Set the Camera Extrinsics object  Parameter ``srcCameraId``:     Camera Id of the camera which will be considered as relative origin.  Parameter ``destCameraId``:     Camera Id of the camera which will be considered as destination from     srcCameraId.  Parameter ``rotationMatrix``:     Rotation between srcCameraId and destCameraId origins.  Parameter ``translation``:     Translation between srcCameraId and destCameraId origins.  Parameter ``specTranslation``:     Translation between srcCameraId and destCameraId origins from the design.
method
method
setCameraType(self, cameraId: CameraBoardSocket, cameraModel: CameraModel)
Set the Camera Type object  Parameter ``cameraId``:     CameraId of the camera for which cameraModel Type is being updated.  Parameter ``cameraModel``:     Type of the model the camera represents
method
setDeviceName(self, deviceName: str)
Set the deviceName which responses to getDeviceName of Device  Parameter ``deviceName``:     Sets device name.
method
setDistortionCoefficients(self, cameraId: CameraBoardSocket, distortionCoefficients: list [ float ])
Sets the distortion Coefficients obtained from camera calibration  Parameter ``cameraId``:     Camera Id of the camera for which distortion coefficients are computed  Parameter ``distortionCoefficients``:     Distortion Coefficients of the respective Camera.
method
setFov(self, cameraId: CameraBoardSocket, hfov: float)
Set the Fov of the Camera  Parameter ``cameraId``:     Camera Id of the camera  Parameter ``hfov``:     Horizontal fov of the camera from Camera Datasheet
method
setImuExtrinsics(self, destCameraId: CameraBoardSocket, rotationMatrix: list [ list [ float ] ], translation: list [ float ], specTranslation: list [ float ] = [0.0, 0.0, 0.0])
Set the Imu to Camera Extrinsics object  Parameter ``destCameraId``:     Camera Id of the camera which will be considered as destination from IMU.  Parameter ``rotationMatrix``:     Rotation between IMU and destCameraId origins.  Parameter ``translation``:     Translation between IMU and destCameraId origins.  Parameter ``specTranslation``:     Translation between IMU and destCameraId origins from the design.
method
setLensPosition(self, cameraId: CameraBoardSocket, lensPosition: int)
Sets the lens position obtained from camera calibration  Parameter ``cameraId``:     Camera Id of the camera  Parameter ``lensPosition``:     lens position value of the camera at the time of calibration
method
setProductName(self, productName: str)
Set the productName which acts as an alias for users to identify the device  Parameter ``productName``:     Sets product name (alias).
method
setStereoLeft(self, cameraId: CameraBoardSocket, rectifiedRotation: list [ list [ float ] ])
Set the Stereo Left Rectification object  Parameter ``cameraId``:     CameraId of the camera which will be used as left Camera of stereo Setup  Parameter ``rectifiedRotation``:     Rectification rotation of the left camera required for feature matching  Homography of the Left Rectification = Intrinsics_right * rectifiedRotation * inv(Intrinsics_left)
method
setStereoRight(self, cameraId: CameraBoardSocket, rectifiedRotation: list [ list [ float ] ])
Set the Stereo Right Rectification object  Parameter ``cameraId``:     CameraId of the camera which will be used as right Camera of stereo Setup  Parameter ``rectifiedRotation``:     Rectification rotation of the right camera required for feature matching  Homography of the Right Rectification = Intrinsics_right * rectifiedRotation * inv(Intrinsics_right)
class

depthai.CameraControl(depthai.Buffer)

class
AntiBandingMode
Members:    OFF    MAINS_50_HZ    MAINS_60_HZ    AUTO
class
AutoFocusMode
Members:    OFF    AUTO    MACRO    CONTINUOUS_VIDEO    CONTINUOUS_PICTURE    EDOF
class
AutoWhiteBalanceMode
Members:    OFF    AUTO    INCANDESCENT    FLUORESCENT    WARM_FLUORESCENT    DAYLIGHT    CLOUDY_DAYLIGHT    TWILIGHT    SHADE
class
CaptureIntent
Members:    CUSTOM    PREVIEW    STILL_CAPTURE    VIDEO_RECORD    VIDEO_SNAPSHOT    ZERO_SHUTTER_LAG
class
Command
Members:    START_STREAM    STOP_STREAM    STILL_CAPTURE    MOVE_LENS    AF_TRIGGER    AE_MANUAL    AE_AUTO    AWB_MODE    SCENE_MODE    ANTIBANDING_MODE    EXPOSURE_COMPENSATION    AE_LOCK    AE_TARGET_FPS_RANGE    AWB_LOCK    CAPTURE_INTENT    CONTROL_MODE    FRAME_DURATION    SENSITIVITY    EFFECT_MODE    AF_MODE    NOISE_REDUCTION_STRENGTH    SATURATION    BRIGHTNESS    STREAM_FORMAT    RESOLUTION    SHARPNESS    CUSTOM_USECASE    CUSTOM_CAPT_MODE    CUSTOM_EXP_BRACKETS    CUSTOM_CAPTURE    CONTRAST    AE_REGION    AF_REGION    LUMA_DENOISE    CHROMA_DENOISE    WB_COLOR_TEMP
class
ControlMode
Members:    OFF    AUTO    USE_SCENE_MODE
class
EffectMode
Members:    OFF    MONO    NEGATIVE    SOLARIZE    SEPIA    POSTERIZE    WHITEBOARD    BLACKBOARD    AQUA
class
FrameSyncMode
Members:    OFF    OUTPUT    INPUT
class
SceneMode
Members:    UNSUPPORTED    FACE_PRIORITY    ACTION    PORTRAIT    LANDSCAPE    NIGHT    NIGHT_PORTRAIT    THEATRE    BEACH    SNOW    SUNSET    STEADYPHOTO    FIREWORKS    SPORTS    PARTY    CANDLELIGHT    BARCODE
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
method
method
method
method
clearMiscControls(self)
Clear the list of miscellaneous controls set by `setControl`
method
getCaptureStill(self) -> bool: bool
Check whether command to capture a still is set  Returns:     True if capture still command is set
method
method
method
getHdr(self) -> bool: bool
Whether or not HDR (High Dynamic Range) mode is enabled  Returns:     True if HDR mode is enabled, false otherwise
method
getLensPosition(self) -> int: int
Retrieves lens position, range 0..255. Returns -1 if not available
method
getLensPositionRaw(self) -> float: float
Retrieves lens position, range 0.0f..1.0f.
method
getMiscControls(self) -> list[tuple[str, str]]: list[tuple[str, str]]
Get the list of miscellaneous controls set by `setControl`  Returns:     A list of <key, value> pairs as strings
method
getSensitivity(self) -> int: int
Retrieves sensitivity, as an ISO value
method
setAntiBandingMode(self, mode: CameraControl.AntiBandingMode) -> CameraControl: CameraControl
Set a command to specify anti-banding mode. Anti-banding / anti-flicker works in auto-exposure mode, by controlling the exposure time to be applied in multiples of half the mains period, for example in multiple of 10ms for 50Hz (period 20ms) AC-powered illumination sources.  If the scene would be too bright for the smallest exposure step (10ms in the example, with ISO at a minimum of 100), anti-banding is not effective.  Parameter ``mode``:     Anti-banding mode to use. Default: `MAINS_50_HZ`
method
setAutoExposureCompensation(self, compensation: int) -> CameraControl: CameraControl
Set a command to specify auto exposure compensation  Parameter ``compensation``:     Compensation value between -9..9, default 0
method
method
method
setAutoExposureLock(self, lock: bool) -> CameraControl: CameraControl
Set a command to specify lock auto exposure  Parameter ``lock``:     Auto exposure lock mode enabled or disabled
method
setAutoExposureRegion(self, startX: int, startY: int, width: int, height: int) -> CameraControl: CameraControl
Set a command to specify auto exposure region in pixels. Note: the region should be mapped to the configured sensor resolution, before ISP scaling  Parameter ``startX``:     X coordinate of top left corner of region  Parameter ``startY``:     Y coordinate of top left corner of region  Parameter ``width``:     Region width  Parameter ``height``:     Region height
method
setAutoFocusLensRange(self, infinityPosition: int, macroPosition: int) -> CameraControl: CameraControl
Set autofocus lens range, `infinityPosition < macroPosition`, valid values `0..255`. May help to improve autofocus in case the lens adjustment is not typical/tuned
method
setAutoFocusMode(self, mode: CameraControl.AutoFocusMode) -> CameraControl: CameraControl
Set a command to specify autofocus mode. Default `CONTINUOUS_VIDEO`
method
setAutoFocusRegion(self, startX: int, startY: int, width: int, height: int) -> CameraControl: CameraControl
Set a command to specify focus region in pixels. Note: the region should be mapped to the configured sensor resolution, before ISP scaling  Parameter ``startX``:     X coordinate of top left corner of region  Parameter ``startY``:     Y coordinate of top left corner of region  Parameter ``width``:     Region width  Parameter ``height``:     Region height
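A short sketch combining the two region setters; note that the coordinates refer to the configured sensor resolution, before ISP scaling, and the window values here are arbitrary examples:

```python
import depthai as dai

ctrl = dai.CameraControl()
# Meter exposure and focus on the same 400x300 window at sensor coordinates (100, 100)
ctrl.setAutoExposureRegion(100, 100, 400, 300)
ctrl.setAutoFocusRegion(100, 100, 400, 300)
```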
method
method
setAutoWhiteBalanceLock(self, lock: bool) -> CameraControl: CameraControl
Set a command to specify auto white balance lock  Parameter ``lock``:     Auto white balance lock mode enabled or disabled
method
setAutoWhiteBalanceMode(self, mode: CameraControl.AutoWhiteBalanceMode) -> CameraControl: CameraControl
Set a command to specify auto white balance mode  Parameter ``mode``:     Auto white balance mode to use. Default `AUTO`
method
setBrightness(self, value: int) -> CameraControl: CameraControl
Set a command to adjust image brightness  Parameter ``value``:     Brightness, range -10..10, default 0
method
setCaptureIntent(self, mode: CameraControl.CaptureIntent) -> CameraControl: CameraControl
Set a command to specify capture intent mode  Parameter ``mode``:     Capture intent mode
method
method
setChromaDenoise(self, value: int) -> CameraControl: CameraControl
Set a command to adjust chroma denoise amount  Parameter ``value``:     Chroma denoise amount, range 0..4, default 1
method
method
setContrast(self, value: int) -> CameraControl: CameraControl
Set a command to adjust image contrast  Parameter ``value``:     Contrast, range -10..10, default 0
method
setControlMode(self, mode: CameraControl.ControlMode) -> CameraControl: CameraControl
Set a command to specify control mode  Parameter ``mode``:     Control mode
method
setEffectMode(self, mode: CameraControl.EffectMode) -> CameraControl: CameraControl
Set a command to specify effect mode  Parameter ``mode``:     Effect mode
method
setExternalTrigger(self, numFramesBurst: int, numFramesDiscard: int) -> CameraControl: CameraControl
Set a command to enable external trigger snapshot mode  A rising edge on the sensor FSIN pin will make it capture a sequence of `numFramesBurst` frames. The first `numFramesDiscard` frames will be skipped as configured (can be set to 0 as well), as they may have degraded quality
method
setFrameSyncMode(self, mode: CameraControl.FrameSyncMode) -> CameraControl: CameraControl
Set the frame sync mode for continuous streaming operation mode, translating to how the camera pin FSIN/FSYNC is used: input/output/disabled
method
setHdr(self, enable: bool) -> CameraControl: CameraControl
Whether or not to enable HDR (High Dynamic Range) mode  Parameter ``enable``:     True to enable HDR mode, false to disable
method
setLumaDenoise(self, value: int) -> CameraControl: CameraControl
Set a command to adjust luma denoise amount  Parameter ``value``:     Luma denoise amount, range 0..4, default 1
method
method
setManualFocus(self, lensPosition: int) -> CameraControl: CameraControl
Set a command to specify manual focus position  Parameter ``lensPosition``:     specify lens position 0..255
method
setManualFocusRaw(self, lensPositionRaw: float) -> CameraControl: CameraControl
Set a command to specify manual focus position (more precise control).  Parameter ``lensPositionRaw``:     specify lens position 0.0f .. 1.0f  Returns:     CameraControl
method
setManualWhiteBalance(self, colorTemperatureK: int) -> CameraControl: CameraControl
Set a command to manually specify white-balance color correction  Parameter ``colorTemperatureK``:     Light source color temperature in kelvins, range 1000..12000
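The manual controls compose the same way; a hedged sketch (example values only) that fixes both focus and white balance on one message:

```python
import depthai as dai

ctrl = dai.CameraControl()
ctrl.setManualFocus(130)          # lens position, valid range 0..255
ctrl.setManualWhiteBalance(4000)  # color temperature in kelvins, range 1000..12000
```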
method
method
setSaturation(self, value: int) -> CameraControl: CameraControl
Set a command to adjust image saturation  Parameter ``value``:     Saturation, range -10..10, default 0
method
setSceneMode(self, mode: CameraControl.SceneMode) -> CameraControl: CameraControl
Set a command to specify scene mode  Parameter ``mode``:     Scene mode
method
setSharpness(self, value: int) -> CameraControl: CameraControl
Set a command to adjust image sharpness  Parameter ``value``:     Sharpness, range 0..4, default 1
method
method
method
method
setStrobeExternal(self, gpioNumber: int, activeLevel: int) -> CameraControl: CameraControl
Enable STROBE output driven by a MyriadX GPIO, optionally configuring the polarity. This normally requires a FSIN/FSYNC/trigger input for MyriadX (usually GPIO 41) to generate timings
method
setStrobeSensor(self, activeLevel: int) -> CameraControl: CameraControl
Enable STROBE output on sensor pin, optionally configuring the polarity. Note: for many sensors the polarity is high-active and not configurable
class

depthai.CameraControl.Command

variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
method
method
method
method
method
method
method
method
method
method
property
property
class

depthai.CameraSensorConfig

class

depthai.CapabilityRangeFloat

class

depthai.CapabilityRangeUint

class

depthai.ChipTemperature

variable
variable
variable
variable
variable
method
class

depthai.ChipTemperatureS3

variable
variable
variable
variable
variable
method
class

depthai.CircleAnnotation

class

depthai.Color

variable
variable
variable
variable
method
class

depthai.ColorCameraProperties

class
ColorOrder
For 24 bit color these can be either RGB or BGR  Members:    BGR    RGB
class
SensorResolution
Select the camera sensor resolution  Members:    THE_1080_P    THE_1200_P    THE_4_K    THE_5_MP    THE_12_MP    THE_4000X3000    THE_13_MP    THE_5312X6000    THE_48_MP    THE_720_P    THE_800_P    THE_240X180    THE_1280X962    THE_2000X1500    THE_2028X1520    THE_2104X1560    THE_1440X1080    THE_1352X1012    THE_2024X1520
class
WarpMeshSource
Warp mesh source  Members:    AUTO    NONE    CALIBRATION    URI
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
class

depthai.ColorCameraProperties.SensorResolution

variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
method
method
method
method
method
method
method
method
method
method
property
property
class

depthai.CpuUsage

class

depthai.CrashDump.CrashReport.ErrorSourceInfo.AssertContext

class

depthai.CrashDump.CrashReport.ErrorSourceInfo.TrapContext

class

depthai.CrashDump.CrashReport.ThreadCallstack.CallstackContext

class

depthai.DatatypeEnum

variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
method
method
method
method
method
method
method
method
method
method
property
property
class

depthai.DetectionParserProperties

class

depthai.Device(depthai.DeviceBase)

class
Config
Device specific configuration
class
ReconnectionStatus
Members:    RECONNECT_FAILED    RECONNECTED    RECONNECTING
method
method
getPlatform(self) -> Platform: Platform
Get the platform of the connected device  Returns:     Platform Platform enum
method
getPlatformAsString(self) -> str: str
Get the platform of the connected device as string  Returns:     std::string String representation of Platform
class

depthai.DeviceBase

static method
DeviceBase.getAllAvailableDevices() -> list[DeviceInfo]: list[DeviceInfo]
Returns all available devices  Returns:     Vector of available devices
static method
DeviceBase.getAllConnectedDevices() -> list[DeviceInfo]: list[DeviceInfo]
Returns information of all connected devices. The devices could be both connectable as well as already connected to devices.  Returns:     Vector of connected device information
static method
static method
DeviceBase.getDeviceById(deviceId: str) -> tuple[bool, DeviceInfo]: tuple[bool, DeviceInfo]
Finds a device by Device ID. Example: 14442C10D13EABCE00  Parameter ``deviceId``:     Device ID which uniquely specifies a device  Returns:     Tuple of bool and DeviceInfo. Bool specifies if device was found. DeviceInfo     specifies the found device
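A minimal usage sketch; the ID shown is the example from the docstring above, not a real device:

```python
import depthai as dai

found, info = dai.DeviceBase.getDeviceById("14442C10D13EABCE00")
if found:
    print("Found device:", info)
else:
    print("Device not found")
```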
static method
static method
DeviceBase.getFirstAvailableDevice(skipInvalidDevices: bool = True) -> tuple[bool, DeviceInfo]: tuple[bool, DeviceInfo]
Gets first available device. Device can be either in XLINK_UNBOOTED or XLINK_BOOTLOADER state  Returns:     Tuple of bool and DeviceInfo. Bool specifies if device was found. DeviceInfo     specifies the found device
static method
DeviceBase.getGlobalProfilingData() -> ProfilingData: ProfilingData
Get current global accumulated profiling data  Returns:     ProfilingData from all devices
method
method
method
method
addLogCallback(self, callback: typing.Callable [ [ LogMessage ] , None ]) -> int: int
Add a callback for device logging. The callback will be called from a separate thread with the LogMessage being passed.  Parameter ``callback``:     Callback to call whenever a log message arrives  Returns:     Id which can be used to later remove the callback
method
close(self)
Closes the connection to the device. A better alternative is to use the context manager: `with depthai.Device(pipeline) as device:`
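As the docstring suggests, the context-manager form handles closing automatically; a minimal sketch, assuming a device is attached and the pipeline would be populated with nodes elsewhere:

```python
import depthai as dai

pipeline = dai.Pipeline()
# ... nodes would be added to the pipeline here ...
with dai.Device(pipeline) as device:
    print("Connected:", device.getDeviceName())
# close() is called automatically when the with-block exits
```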
method
crashDevice(self)
Crashes the device.  .. warning::     ONLY FOR TESTING PURPOSES, it causes an unrecoverable crash on the device
method
factoryResetCalibration(self)
Factory reset EEPROM data if factory backup is available.  Throws:     std::runtime_exception If factory reset was unsuccessful
method
flashCalibration(self, calibrationDataHandler: CalibrationHandler) -> bool: bool
Stores the Calibration and Device information to the Device EEPROM  Parameter ``calibrationObj``:     CalibrationHandler object which is loaded with calibration information.  Returns:     true on successful flash, false on failure
method
flashCalibration2(self, arg0: CalibrationHandler)
Stores the Calibration and Device information to the Device EEPROM  Throws:     std::runtime_exception if failed to flash the calibration  Parameter ``calibrationObj``:     CalibrationHandler object which is loaded with calibration information.
method
flashEepromClear(self)
Destructive action, deletes User area EEPROM contents. Requires PROTECTED permissions  Throws:     std::runtime_exception if failed to clear the EEPROM
method
flashFactoryCalibration(self, arg0: CalibrationHandler)
Stores the Calibration and Device information to the Device EEPROM in Factory area. To perform this action, the correct env variable must be set  Throws:     std::runtime_exception if failed to flash the calibration
method
flashFactoryEepromClear(self)
Destructive action, deletes Factory area EEPROM contents. Requires FACTORY PROTECTED permissions  Throws:     std::runtime_exception if failed to clear the EEPROM
method
getAvailableStereoPairs(self) -> list[StereoPair]: list[StereoPair]
Get stereo pairs taking into account the calibration and connected cameras.  @note This method will always return a subset of `getStereoPairs`.  Returns:     Vector of stereo pairs
method
getBootloaderVersion(self) -> Version|None: Version|None
Gets Bootloader version if it was booted through Bootloader  Returns:     DeviceBootloader::Version if booted through Bootloader or none otherwise
method
getCameraSensorNames(self) -> dict[CameraBoardSocket, str]: dict[CameraBoardSocket, str]
Get sensor names for cameras that are connected to the device  Returns:     Map/dictionary with camera sensor names, indexed by socket
method
getChipTemperature(self) -> ChipTemperature: ChipTemperature
Retrieves current chip temperature as measured by device  Returns:     Temperature of various onboard sensors
method
getCmxMemoryUsage(self) -> MemoryInfo: MemoryInfo
Retrieves current CMX memory information from device  Returns:     Used, remaining and total cmx memory
method
getConnectedCameraFeatures(self) -> list[CameraFeatures]: list[CameraFeatures]
Get cameras that are connected to the device with their features/properties  Returns:     Vector of connected camera features
method
getConnectedCameras(self) -> list[CameraBoardSocket]: list[CameraBoardSocket]
Get cameras that are connected to the device  Returns:     Vector of connected cameras
method
getConnectedIMU(self) -> str: str
Get connected IMU type  Returns:     IMU type
method
getConnectionInterfaces(self) -> list[connectionInterface]: list[connectionInterface]
Get connection interfaces for device  Returns:     Vector of connection types
method
method
getDdrMemoryUsage(self) -> MemoryInfo: MemoryInfo
Retrieves current DDR memory information from device  Returns:     Used, remaining and total ddr memory
method
getDeviceId(self) -> str: str
Get DeviceId of device  Returns:     DeviceId of connected device
method
getDeviceInfo(self) -> DeviceInfo: DeviceInfo
Get the Device Info object of the device which is currently running  Returns:     DeviceInfo of the current device in execution
method
getDeviceName(self) -> typing.Any: typing.Any
Get device name if available  Returns:     device name or empty string if not available
method
getEmbeddedIMUFirmwareVersion(self) -> Version: Version
Get embedded IMU firmware version to which IMU can be upgraded  Returns:     Embedded IMU firmware version to which the IMU can be upgraded
method
getIMUFirmwareUpdateStatus(self) -> tuple[bool, float]: tuple[bool, float]
Get IMU firmware update status  Returns:     Whether IMU firmware update is done and last firmware update progress as percentage. A return value of true and 100 means the update was successful; true with a value other than 100 means the update failed
method
getIMUFirmwareVersion(self) -> Version: Version
Get connected IMU firmware version  Returns:     IMU firmware version
method
getIrDrivers(self) -> list[tuple[str, int, int]]: list[tuple[str, int, int]]
Retrieves detected IR laser/LED drivers.  Returns:     Vector of tuples containing: driver name, I2C bus, I2C address. For OAK-D-Pro it should be `[{"LM3644", 2, 0x63}]`
method
getLeonCssCpuUsage(self) -> CpuUsage: CpuUsage
Retrieves average CSS Leon CPU usage  Returns:     Average CPU usage and sampling duration
method
getLeonCssHeapUsage(self) -> MemoryInfo: MemoryInfo
Retrieves current CSS Leon CPU heap information from device  Returns:     Used, remaining and total heap memory
method
getLeonMssCpuUsage(self) -> CpuUsage: CpuUsage
Retrieves average MSS Leon CPU usage  Returns:     Average CPU usage and sampling duration
method
getLeonMssHeapUsage(self) -> MemoryInfo: MemoryInfo
Retrieves current MSS Leon CPU heap information from device  Returns:     Used, remaining and total heap memory
method
getLogLevel(self) -> LogLevel: LogLevel
Gets current logging severity level of the device.  Returns:     Logging severity level
method
getLogOutputLevel(self) -> LogLevel: LogLevel
Gets logging level which decides printing level to standard output.  Returns:     Standard output printing severity
method
getMxId(self) -> str: str
Get MxId of device  Returns:     MxId of connected device
method
getProductName(self) -> typing.Any: typing.Any
Get product name if available  Returns:     product name or empty string if not available
method
getProfilingData(self) -> ProfilingData: ProfilingData
Get current accumulated profiling data  Returns:     ProfilingData from the specific device
method
getStereoPairs(self) -> list[StereoPair]: list[StereoPair]
Get stereo pairs based on the device type.  Returns:     Vector of stereo pairs
method
getSystemInformationLoggingRate(self) -> float: float
Gets current rate of system information logging ("info" severity) in Hz.  Returns:     Logging rate in Hz
method
getUsbSpeed(self) -> UsbSpeed: UsbSpeed
Retrieves USB connection speed  Returns:     USB connection speed of connected device if applicable. Unknown otherwise.
method
getXLinkChunkSize(self) -> int: int
Gets current XLink chunk size.  Returns:     XLink chunk size in bytes
method
hasCrashDump(self) -> bool: bool
Retrieves whether a crash dump is stored on the device or not.
method
isClosed(self) -> bool: bool
Is the device already closed (or disconnected)  .. warning::     This function is thread-unsafe and may return outdated incorrect values. It     is only meant for use in simple single-threaded code. Well written code     should handle exceptions when calling any DepthAI apis to handle hardware     events and multithreaded use.
method
isEepromAvailable(self) -> bool: bool
Check if EEPROM is available  Returns:     True if EEPROM is present on board, false otherwise
method
isPipelineRunning(self) -> bool: bool
Checks if the device's pipeline is already running  Returns:     True if running, false otherwise
method
readCalibration(self) -> CalibrationHandler: CalibrationHandler
Fetches the EEPROM data from the device and loads it into CalibrationHandler object If no calibration is flashed, it returns default  Returns:     The CalibrationHandler object containing the calibration currently flashed     on device EEPROM
method
readCalibration2(self) -> CalibrationHandler: CalibrationHandler
Fetches the EEPROM data from the device and loads it into CalibrationHandler object  Throws:     std::runtime_exception if no calibration is flashed  Returns:     The CalibrationHandler object containing the calibration currently flashed     on device EEPROM
method
readCalibrationOrDefault(self) -> CalibrationHandler: CalibrationHandler
Fetches the EEPROM data from the device and loads it into CalibrationHandler object If no calibration is flashed, it returns default  Returns:     The CalibrationHandler object containing the calibration currently flashed     on device EEPROM
method
readCalibrationRaw(self) -> ...: ...
Fetches the raw EEPROM data from User area  Throws:     std::runtime_exception if any error occurred  Returns:     Binary dump of User area EEPROM data
method
readFactoryCalibration(self) -> CalibrationHandler: CalibrationHandler
Fetches the EEPROM data from Factory area and loads it into CalibrationHandler object  Throws:     std::runtime_exception if no calibration is flashed  Returns:     The CalibrationHandler object containing the calibration currently flashed     on device EEPROM in Factory Area
method
readFactoryCalibrationOrDefault(self) -> CalibrationHandler: CalibrationHandler
Fetches the EEPROM data from Factory area and loads it into CalibrationHandler object If no calibration is flashed, it returns default  Returns:     The CalibrationHandler object containing the calibration currently flashed     on device EEPROM in Factory Area
method
readFactoryCalibrationRaw(self) -> ...: ...
Fetches the raw EEPROM data from Factory area  Throws:     std::runtime_exception if any error occurred  Returns:     Binary dump of Factory area EEPROM data
method
removeLogCallback(self, callbackId: int) -> bool: bool
Removes a callback  Parameter ``callbackId``:     Id of callback to be removed  Returns:     True if callback was removed, false otherwise
method
setIrFloodLightIntensity(self, intensity: float, mask: int = -1) -> bool: bool
Sets the intensity of the IR Flood Light. Limits: Intensity is directly normalized to 0 - 1500mA current. The duty cycle is 30% when exposure time is longer than 30% of the frame time; otherwise the duty cycle is 100% of exposure time. The duty cycle is controlled by the `left` camera STROBE, aligned to start of exposure. The emitter is turned off by default  Parameter ``intensity``:     Intensity in range 0 to 1 that determines brightness; 0 or negative to turn off  Parameter ``mask``:     Optional mask to modify only Left (0x1) or Right (0x2) sides on OAK-D-Pro-W-DEV  Returns:     True on success, false if not found or other failure
method
setIrLaserDotProjectorIntensity(self, intensity: float, mask: int = -1) -> bool: bool
Sets the intensity of the IR Laser Dot Projector. Limits: up to 765mA at 30% frame time duty cycle when exposure time is longer than 30% of the frame time; otherwise the duty cycle is 100% of exposure time, with current increased up to max 1200mA to make up for the shorter duty cycle. The duty cycle is controlled by the `left` camera STROBE, aligned to start of exposure. The emitter is turned off by default  Parameter ``intensity``:     Intensity in range 0 to 1 that determines brightness; 0 or negative to turn off  Parameter ``mask``:     Optional mask to modify only Left (0x1) or Right (0x2) sides on OAK-D-Pro-W-DEV  Returns:     True on success, false if not found or other failure
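A hedged sketch enabling both IR emitters; it assumes an OAK-D-Pro-class device with IR drivers is attached (both calls return False if no driver is found):

```python
import depthai as dai

with dai.Device() as device:  # assumes a Pro-series device with IR emitters
    device.setIrLaserDotProjectorIntensity(0.8)  # 0..1; 0 or negative turns it off
    device.setIrFloodLightIntensity(0.1)         # 0..1; duty cycle per the docstring above
```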
method
setLogLevel(self, level: LogLevel)
Sets the devices logging severity level. This level affects which logs are transferred from device to host.  Parameter ``level``:     Logging severity
method
setLogOutputLevel(self, level: LogLevel)
Sets logging level which decides printing level to standard output. If lower than setLogLevel, no messages will be printed  Parameter ``level``:     Standard output printing severity
method
setMaxReconnectionAttempts(self, maxAttempts: int, callback: typing.Callable [ [ Device.ReconnectionStatus ] , None ] = None)
Sets max number of automatic reconnection attempts  Parameter ``maxAttempts``:     Maximum number of reconnection attempts, 0 to disable reconnection  Parameter ``callback``:     Callback to be called when reconnection is attempted
method
setSystemInformationLoggingRate(self, rateHz: float)
Sets rate of system information logging ("info" severity). Default 1Hz If parameter is less or equal to zero, then system information logging will be disabled  Parameter ``rateHz``:     Logging rate in Hz
method
method
setXLinkChunkSize(self, sizeBytes: int)
Sets the chunk size for splitting device-sent XLink packets. A larger value could increase performance, and 0 disables chunking. A negative value is ignored. Device defaults are configured per protocol, currently 64*1024 for both USB and Ethernet.  Parameter ``sizeBytes``:     XLink chunk size in bytes
method
startIMUFirmwareUpdate(self, forceUpdate: bool = False) -> bool: bool
Starts IMU firmware update asynchronously only if the IMU node is not running. If the current firmware version is the same as the embedded firmware version then it's a no-op. Can be overridden by the forceUpdate parameter. State of the firmware update can be monitored using the getIMUFirmwareUpdateStatus API.  Parameter ``forceUpdate``:     Force firmware update or not. Will perform FW update regardless of current version and embedded firmware version.  Returns:     Whether the firmware update could be started. Returns false if the IMU node is running.
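Putting the two update APIs together, a polling sketch (assumes a connected device whose IMU node is not running; the 1-second poll interval is an arbitrary choice):

```python
import time
import depthai as dai

with dai.Device() as device:
    if device.startIMUFirmwareUpdate():
        done, progress = device.getIMUFirmwareUpdateStatus()
        while not done:
            time.sleep(1)
            done, progress = device.getIMUFirmwareUpdateStatus()
        # per the docstring: done with progress 100 means success
        print("IMU update finished, progress:", progress)
    else:
        print("Update could not be started (IMU node running?)")
```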
class

depthai.DeviceBootloader

class
class
class
Memory
Members:    AUTO    FLASH    EMMC
class
class
class
Section
Members:    AUTO    HEADER    BOOTLOADER    BOOTLOADER_CONFIG    APPLICATION
class
Type
Members:    AUTO    USB    NETWORK
class
static method
static method
DeviceBootloader.getAllAvailableDevices() -> list[DeviceInfo]: list[DeviceInfo]
Searches for connected devices in either UNBOOTED or BOOTLOADER states.  Returns:     Vector of all found devices
static method
static method
static method
DeviceBootloader.getFirstAvailableDevice() -> tuple[bool, DeviceInfo]: tuple[bool, DeviceInfo]
Searches for connected devices in either UNBOOTED or BOOTLOADER states and returns first available.  Returns:     Tuple of boolean and DeviceInfo. If found boolean is true and DeviceInfo     describes the device. Otherwise false
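A short sketch connecting to the first available device and querying its bootloader; the context-manager form mirrors the `close()` docstring below:

```python
import depthai as dai

found, info = dai.DeviceBootloader.getFirstAvailableDevice()
if found:
    with dai.DeviceBootloader(info) as bl:
        print("Bootloader type:", bl.getType())
        print("Version:", bl.getVersion())
```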
static method
method
method
method
method
bootMemory(self, fw: ...)
Boots a custom FW in memory  Parameter ``fw``:     Custom firmware to boot  Throws:     A runtime exception if there are any communication issues
method
bootUsbRomBootloader(self)
Boots into integrated ROM bootloader in USB mode  Throws:     A runtime exception if there are any communication issues
method
close(self)
Closes the connection to the device. A better alternative is to use the context manager: `with depthai.DeviceBootloader(deviceInfo) as bootloader:`
method
method
flashBootHeader(self, memory: DeviceBootloader.Memory, frequency: int = -1, location: int = -1, dummyCycles: int = -1, offset: int = -1) -> tuple[bool, str]: tuple[bool, str]
Flash optimized boot header  Parameter ``memory``:     Which memory to flash the header to  Parameter ``frequency``:     SPI specific parameter, frequency in MHz  Parameter ``location``:     Target location the header should boot to. Defaults to location of bootloader  Parameter ``dummyCycles``:     SPI specific parameter  Parameter ``offset``:     Offset in memory to flash the header to. Defaults to offset of boot header  Returns:     status as std::tuple<bool, std::string>
method
method
flashClear(self, memory: DeviceBootloader.Memory = ...) -> tuple[bool, str]: tuple[bool, str]
Clears the flashed application on the device by removing the SBR boot structure. Doesn't remove the fast boot header, keeping the capability to still boot the application
method
flashConfig(self, config: DeviceBootloader.Config, memory: DeviceBootloader.Memory = ..., type: DeviceBootloader.Type = ...) -> tuple[bool, str]: tuple[bool, str]
Flashes configuration to bootloader  Parameter ``configData``:     Configuration structure  Parameter ``memory``:     Optional - to which memory flash configuration  Parameter ``type``:     Optional - for which type of bootloader to flash configuration
method
flashConfigClear(self, memory: DeviceBootloader.Memory = ..., type: DeviceBootloader.Type = ...) -> tuple[bool, str]: tuple[bool, str]
Clears configuration data  Parameter ``memory``:     Optional - on which memory to clear configuration data  Parameter ``type``:     Optional - for which type of bootloader to clear configuration data
method
flashConfigData(self, configData: json, memory: DeviceBootloader.Memory = ..., type: DeviceBootloader.Type = ...) -> tuple[bool, str]: tuple[bool, str]
Flashes configuration data to bootloader  Parameter ``configData``:     Unstructured configuration data  Parameter ``memory``:     Optional - to which memory flash configuration  Parameter ``type``:     Optional - for which type of bootloader to flash configuration
method
flashConfigFile(self, configData: Path, memory: DeviceBootloader.Memory = ..., type: DeviceBootloader.Type = ...) -> tuple[bool, str]: tuple[bool, str]
Flashes configuration data to bootloader  Parameter ``configPath``:     Path to the configuration file to flash  Parameter ``memory``:     Optional - to which memory flash configuration  Parameter ``type``:     Optional - for which type of bootloader to flash configuration
method
method
method
flashFastBootHeader(self, memory: DeviceBootloader.Memory, frequency: int = -1, location: int = -1, dummyCycles: int = -1, offset: int = -1) -> tuple[bool, str]: tuple[bool, str]
Flash fast boot header. Application must already be present in flash, or location must be specified manually. Note - Can soft brick your device if firmware location changes.  Parameter ``memory``:     Which memory to flash the header to  Parameter ``frequency``:     SPI specific parameter, frequency in MHz  Parameter ``location``:     Target location the header should boot to. Default to location of bootloader  Parameter ``dummyCycles``:     SPI specific parameter  Parameter ``offset``:     Offset in memory to flash the header to. Defaults to offset of boot header  Returns:     status as std::tuple<bool, std::string>
method
flashGpioModeBootHeader(self, memory: DeviceBootloader.Memory, mode: int) -> tuple[bool, str]: tuple[bool, str]
Flash boot header which boots same as equivalent GPIO mode would  Parameter ``gpioMode``:     GPIO mode equivalent
method
flashUsbRecoveryBootHeader(self, memory: DeviceBootloader.Memory) -> tuple[bool, str]: tuple[bool, str]
Flash USB recovery boot header. Switches to USB ROM Bootloader  Parameter ``memory``:     Which memory to flash the header to
method
flashUserBootloader(self, progressCallback: typing.Callable [ [ float ] , None ], path: Path = '') -> tuple[bool, str]: tuple[bool, str]
Flashes user bootloader to the current board. Available for NETWORK bootloader type  Parameter ``progressCallback``:     Callback that sends back a value between 0..1 which signifies current flashing progress  Parameter ``path``:     Optional path to a custom bootloader to flash
method
getMemoryInfo(self, arg0: DeviceBootloader.Memory) -> DeviceBootloader.MemoryInfo: DeviceBootloader.MemoryInfo
Retrieves information about specified memory  Parameter ``memory``:     Specifies which memory to query
method
getType(self) -> DeviceBootloader.Type: DeviceBootloader.Type
Returns:     Type of currently connected bootloader
method
getVersion(self) -> Version: Version
Returns:     Version of current running bootloader
method
isAllowedFlashingBootloader(self) -> bool: bool
Returns:     True if allowed to flash bootloader
method
isEmbeddedVersion(self) -> bool: bool
Returns:     True when bootloader was booted using latest bootloader integrated in the     library. False when bootloader is already running on the device and just     connected to.
method
isUserBootloader(self) -> bool: bool
Retrieves whether current bootloader is User Bootloader (B out of A/B configuration)
method
isUserBootloaderSupported(self) -> bool: bool
Checks whether User Bootloader is supported with current bootloader  Returns:     true if User Bootloader is supported, false otherwise
method
readApplicationInfo(self, memory: DeviceBootloader.Memory) -> DeviceBootloader.ApplicationInfo: DeviceBootloader.ApplicationInfo
Reads information about flashed application in specified memory from device  Parameter ``memory``:     Specifies which memory to query
method
readConfig(self, memory: DeviceBootloader.Memory = ..., type: DeviceBootloader.Type = ...) -> DeviceBootloader.Config: DeviceBootloader.Config
Reads configuration from bootloader  Parameter ``memory``:     Optional - from which memory to read configuration  Parameter ``type``:     Optional - from which type of bootloader to read configuration  Returns:     Configuration structure
method
readConfigData(self, memory: DeviceBootloader.Memory = ..., type: DeviceBootloader.Type = ...) -> json: json
Reads configuration data from bootloader  Returns:     Unstructured configuration data  Parameter ``memory``:     Optional - from which memory to read configuration data  Parameter ``type``:     Optional - from which type of bootloader to read configuration data
method
class

depthai.DeviceBootloader.ApplicationInfo

class

depthai.DeviceBootloader.MemoryInfo

class

depthai.DeviceBootloader.UsbConfig

class

depthai.EdgeDetectorConfig(depthai.Buffer)

method
method
method
getConfigData(self) -> EdgeDetectorConfigData: EdgeDetectorConfigData
Retrieve configuration data for EdgeDetector  Returns:     EdgeDetectorConfigData: sobel filter horizontal and vertical 3x3 kernels
method
setSobelFilterKernels(self, horizontalKernel: list [ list [ int ] ], verticalKernel: list [ list [ int ] ])
Set sobel filter horizontal and vertical 3x3 kernels  Parameter ``horizontalKernel``:     Used for horizontal gradient computation in 3x3 Sobel filter  Parameter ``verticalKernel``:     Used for vertical gradient computation in 3x3 Sobel filter
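A sketch that applies the default kernels listed under `EdgeDetectorConfigData` below (note the zero middle column/row requirement):

```python
import depthai as dai

cfg = dai.EdgeDetectorConfig()
horizontal = [[1, 0, -1],
              [2, 0, -2],
              [1, 0, -1]]    # 2nd column must be 0
vertical   = [[1, 2, 1],
              [0, 0, 0],
              [-1, -2, -1]]  # 2nd row must be 0
cfg.setSobelFilterKernels(horizontal, vertical)
```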
class

depthai.EdgeDetectorConfigData

method
property
sobelFilterHorizontalKernel
Used for horizontal gradient computation in 3x3 Sobel filter Format - 3x3 matrix, 2nd column must be 0 Default - +1 0 -1; +2 0 -2; +1 0 -1
method
property
sobelFilterVerticalKernel
Used for vertical gradient computation in 3x3 Sobel filter Format - 3x3 matrix, 2nd row must be 0 Default - +1 +2 +1; 0 0 0; -1 -2 -1
method
class

depthai.EdgeDetectorProperties

property
initialConfig
Initial edge detector config
method
property
numFramesPool
Num frames in output pool
method
property
outputFrameSize
Maximum output frame size in bytes (eg: 300x300 BGR image -> 300*300*3 bytes)
method
class

depthai.EncodedFrame(depthai.Buffer)

class
FrameType
Members:    I    P    B    Unknown
class
Profile
Members:    JPEG    AVC    HEVC
method
method
method
getBitrate(self) -> int: int
Retrieves the encoding bitrate
method
getColorTemperature(self) -> int: int
Retrieves white-balance color temperature of the light source, in kelvins
method
method
method
getHeight(self) -> int: int
Retrieves image height in pixels
method
getInstanceNum(self) -> int: int
Retrieves instance number
method
getLensPosition(self) -> int: int
Retrieves lens position, range 0..255. Returns -1 if not available
method
getLensPositionRaw(self) -> float: float
Retrieves lens position, range 0.0f..1.0f. Returns -1 if not available
method
getLossless(self) -> bool: bool
Returns true if encoding is lossless (JPEG only)
method
getProfile(self) -> EncodedFrame.Profile: EncodedFrame.Profile
Retrieves the encoding profile (JPEG, AVC or HEVC)
method
getQuality(self) -> int: int
Retrieves the encoding quality
method
getSensitivity(self) -> int: int
Retrieves sensitivity, as an ISO value
method
getSequenceNum(self) -> int: int
Retrieves image sequence number
method
getTimestamp(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp related to dai::Clock::now()
method
getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
method
method
getWidth(self) -> int: int
Retrieves image width in pixels
method
method
method
setHeight(self, height: int) -> EncodedFrame: EncodedFrame
Specifies frame height  Parameter ``height``:     frame height
method
setLossless(self, arg0: bool) -> EncodedFrame: EncodedFrame
Specifies whether encoding is lossless (JPEG only)
method
setProfile(self, arg0: EncodedFrame.Profile) -> EncodedFrame: EncodedFrame
Specifies the encoding profile (JPEG, AVC or HEVC)
method
method
method
method
setWidth(self, width: int) -> EncodedFrame: EncodedFrame
Specifies frame width  Parameter ``width``:     frame width
class

depthai.EventData

method
class

depthai.EventsManager

method
method
checkConnection(self) -> bool: bool
Check if the device is connected to Hub. Performs a simple GET request to the URL/health endpoint  Returns:     bool
method
sendEvent(self, name: str, imgFrame: ImgFrame = None, data: list [ EventData ] = [], tags: list [ str ] = [], extraData: dict [ str , str ] = {}, deviceSerialNo: str = '') -> bool: bool
Send an event to the events service  Parameter ``name``:     Name of the event  Parameter ``imgFrame``:     Image frame to send  Parameter ``data``:     List of EventData objects to send  Parameter ``tags``:     List of tags to send  Parameter ``extraData``:     Extra data to send  Parameter ``deviceSerialNo``:     Device serial number  Returns:     bool
method
sendSnap(self, name: str, imgFrame: ImgFrame = None, data: list [ EventData ] = [], tags: list [ str ] = [], extraData: dict [ str , str ] = {}, deviceSerialNo: str = '') -> bool: bool
Send a snap to the events service. Snaps should be used for sending images and other large files.  Parameter ``name``:     Name of the snap  Parameter ``imgFrame``:     Image frame to send  Parameter ``data``:     List of EventData objects to send  Parameter ``tags``:     List of tags to send  Parameter ``extraData``:     Extra data to send  Parameter ``deviceSerialNo``:     Device serial number  Returns:     bool
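A hedged end-to-end sketch; the token string is a placeholder (normally it would come from the DEPTHAI_HUB_API_KEY environment variable, see `setToken` below), and it assumes an `EventsManager` can be constructed directly:

```python
import depthai as dai

em = dai.EventsManager()
em.setToken("hypothetical-token")  # placeholder; normally read from DEPTHAI_HUB_API_KEY
if em.checkConnection():
    em.sendEvent("test-event", tags=["example"], extraData={"note": "hello"})
```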
method
setCacheDir(self, cacheDir: str)
Set the cache directory for storing cached data. By default, the cache directory is set to /internal/private  Parameter ``cacheDir``:     Cache directory  Returns:     void
method
setCacheIfCannotSend(self, cacheIfCannotUpload: bool)
Set whether to cache data if it cannot be sent. By default, cacheIfCannotSend is set to false  Parameter ``cacheIfCannotSend``:     bool  Returns:     void
method
method
setLogResponse(self, logResponse: bool)
Set whether to log the responses from the server. By default, logResponse is set to false  Parameter ``logResponse``:     bool  Returns:     void
method
setQueueSize(self, queueSize: int)
Set the queue size for the amount of events that can be added and sent. By default, the queue size is set to 10  Parameter ``queueSize``:     Queue size  Returns:     void
method
setSourceAppId(self, sourceAppId: str)
Set the source app ID. By default, the source app ID is taken from the environment variable AGENT_APP_ID  Parameter ``sourceAppId``:     Source app ID  Returns:     void
method
setSourceAppIdentifier(self, sourceAppIdentifier: str)
Set the source app identifier. By default, the source app identifier is taken from the environment variable AGENT_APP_IDENTIFIER  Parameter ``sourceAppIdentifier``:     Source app identifier  Returns:     void
method
setToken(self, token: str)
Set the token for the events service. By default, the token is taken from the environment variable DEPTHAI_HUB_API_KEY  Parameter ``token``:     Token for the events service  Returns:     void
method
setUrl(self, url: str)
Set the URL of the events service. By default, the URL is set to https://events-ingest.cloud.luxonis.com  Parameter ``url``:     URL of the events service  Returns:     void
method
setVerifySsl(self, verifySsl: bool)
Set whether to verify the SSL certificate. By default, verifySsl is set to false  Parameter ``verifySsl``:     bool  Returns:     void
method
uploadCachedData(self)
Upload cached data to the events service  Returns:     void
class

depthai.FeatureTrackerConfig(depthai.Buffer)

class
CornerDetector
Corner detector configuration structure.
class
FeatureMaintainer
FeatureMaintainer configuration structure.
class
MotionEstimator
Used for feature reidentification between current and previous features.
method
method
method
method
method
setHwMotionEstimation(self) -> FeatureTrackerConfig: FeatureTrackerConfig
Set hardware accelerated motion estimation using block matching. Faster than optical flow (software implementation) but might not be as accurate.
method
method
setNumTargetFeatures(self, numTargetFeatures: int) -> FeatureTrackerConfig: FeatureTrackerConfig
Set number of target features to detect.  Parameter ``numTargetFeatures``:     Number of features
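For instance, a minimal configuration sketch using the two setters above (the feature count is an arbitrary example value):

```python
import depthai as dai

cfg = dai.FeatureTrackerConfig()
cfg.setNumTargetFeatures(256)  # target feature count
cfg.setHwMotionEstimation()    # faster block-matching instead of optical flow
```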
method
class

depthai.FeatureTrackerConfig.CornerDetector

class
Thresholds
Threshold settings structure for corner detector.
class
Type
Members:    HARRIS    SHI_THOMASI
method
property
cellGridDimension
Ensures distributed feature detection across the image. Image is divided into horizontal and vertical cells, each cell has a target feature count = numTargetFeatures / cellGridDimension. Each cell has its own feature threshold. A value of 4 means that the image is divided into 4x4 cells of equal width/height. Maximum 4, minimum 1.
method
property
enableSobel
Enable 3x3 Sobel operator to smoothen the image whose gradient is to be computed. If disabled, a simple 1D row/column differentiator is used for gradient.
method
property
enableSorting
Enable sorting detected features based on their score or not.
method
property
numMaxFeatures
Hard limit for the maximum number of features that can be detected. 0 means auto, will be set to the maximum value based on memory constraints.
method
property
numTargetFeatures
Target number of features to detect. Maximum number of features is determined at runtime based on algorithm type.
method
property
thresholds
Threshold settings. These are advanced settings, suitable for debugging/special cases.
method
property
type
Corner detector algorithm type.
method
class

depthai.FeatureTrackerConfig.CornerDetector.Thresholds

method
property
decreaseFactor
When detected number of features exceeds the maximum in a cell threshold is lowered by multiplying its value with this factor.
method
property
increaseFactor
When detected number of features doesn't exceed the maximum in a cell, threshold is increased by multiplying its value with this factor.
method
property
initialValue
Minimum strength of a feature which will be detected. 0 means automatic threshold update. Recommended so the tracker can adapt to different scenes/textures. Each cell has its own threshold. Empirical value.
method
property
max
Maximum limit for threshold. Applicable when automatic threshold update is enabled. 0 means auto. Empirical value.
method
property
min
Minimum limit for threshold. Applicable when automatic threshold update is enabled. 0 means auto, 6000000 for HARRIS, 1200 for SHI_THOMASI. Empirical value.
method
class

depthai.FeatureTrackerConfig.FeatureMaintainer

method
property
enable
Enable feature maintaining or not.
method
property
lostFeatureErrorThreshold
Optical flow measures the tracking error for every feature. If the point can’t be tracked or it’s out of the image it will set this error to a maximum value. This threshold defines the level where the tracking accuracy is considered too bad to keep the point.
method
property
minimumDistanceBetweenFeatures
Used to filter out detected feature points that are too close. Requires sorting enabled in detector. Unit of measurement is squared euclidean distance in pixels.
method
property
trackedFeatureThreshold
Once a feature was detected and we started tracking it, we need to update its Harris score on each image. This is needed because a feature point can disappear, or it can become too weak to be tracked. This threshold defines the point where such a feature must be dropped. As the goal of the algorithm is to provide longer tracks, we try to add strong points and track them until they are absolutely untrackable. This is why, this value is usually smaller than the detection threshold.
method
class

depthai.FeatureTrackerConfig.MotionEstimator

class
OpticalFlow
Optical flow configuration structure.
class
Type
Members:    LUCAS_KANADE_OPTICAL_FLOW    HW_MOTION_ESTIMATION
method
property
enable
Enable motion estimation or not.
method
property
opticalFlow
Optical flow configuration. Takes effect only if MotionEstimator algorithm type set to LUCAS_KANADE_OPTICAL_FLOW.
method
property
type
Motion estimator algorithm type.
method
class

depthai.FeatureTrackerConfig.MotionEstimator.OpticalFlow

method
property
epsilon
Feature tracking termination criteria. Optical flow will refine the feature position on each pyramid level until the displacement between two refinements is smaller than this value. Decreasing this number increases runtime.
method
property
maxIterations
Feature tracking termination criteria. Optical flow will refine the feature position at most this many times on each pyramid level. If the epsilon criterion described above is not met after this number of iterations, the algorithm will continue with the current calculated value. Increasing this number increases runtime.
method
property
pyramidLevels
Number of pyramid levels, only for optical flow. AUTO means it's decided based on input resolution: 3 if image width <= 640, else 4. Valid values are either 3/4 for VGA, 4 for 720p and above.
method
property
searchWindowHeight
Image patch height used to track features. Must be an odd number, maximum 9. N means the algorithm will be able to track motion at most (N-1)/2 pixels in a direction per pyramid level. Increasing this number increases runtime
method
property
searchWindowWidth
Image patch width used to track features. Must be an odd number, maximum 9. N means the algorithm will be able to track motion at most (N-1)/2 pixels in a direction per pyramid level. Increasing this number increases runtime
method
class

depthai.FeatureTrackerProperties

property
initialConfig
Initial feature tracker config
method
property
numMemorySlices
Number of memory slices reserved for feature tracking. Optical flow can use 1 or 2 memory slices, while for corner detection only 1 is enough. Maximum number of features depends on the number of allocated memory slices. Hardware motion estimation doesn't require memory slices. Maximum 2, minimum 1.
method
property
numShaves
Number of shaves reserved for feature tracking. Optical flow can use 1 or 2 shaves, while for corner detection only 1 is enough. Hardware motion estimation doesn't require shaves. Maximum 2, minimum 1.
method
class

depthai.GlobalProperties

variable
variable
variable
variable
property
cameraTuningBlobSize
Camera tuning blob size in bytes
method
property
cameraTuningBlobUri
Uri which points to camera tuning blob
method
property
sippBufferSize
SIPP (Signal Image Processing Pipeline) internal memory pool. SIPP is a framework used to schedule HW filters, e.g. ISP, Warp, Median filter etc. Changing the size of this pool is meant for advanced use cases, pushing the limits of the HW. By default memory is allocated in high speed CMX memory. Setting it to 0 will allocate 256 kilobytes in DDR. Units are bytes.
method
property
sippDmaBufferSize
SIPP (Signal Image Processing Pipeline) internal DMA memory pool. SIPP is a framework used to schedule HW filters, e.g. ISP, Warp, Median filter etc. Changing the size of this pool is meant for advanced use cases, pushing the limits of the HW. Memory is allocated in high speed CMX memory. Units are bytes.
method
property
xlinkChunkSize
Chunk size for splitting device-sent XLink packets, in bytes. A larger value could increase performance, with 0 disabling chunking. A negative value won't modify the device defaults - configured per protocol, currently 64*1024 for both USB and Ethernet.
method
class

depthai.IMUData(depthai.Buffer)

class

depthai.IMUReportAccelerometer(depthai.IMUReport)

variable
variable
variable
method
class

depthai.IMUReportGyroscope(depthai.IMUReport)

variable
variable
variable
method
class

depthai.IMUReportMagneticField(depthai.IMUReport)

variable
variable
variable
method
class

depthai.IMUReportRotationVectorWAcc(depthai.IMUReport)

class

depthai.ImageAlignConfig(depthai.Buffer)

method
method
property
staticDepthPlane
Optional static depth plane to align to, in depth units, by default millimeters
method
class

depthai.ImageAlignProperties

variable
property
alignHeight
Optional output height
method
property
alignWidth
Optional output width
method
property
interpolation
Interpolation type to use
method
property
numFramesPool
Num frames in output pool
method
property
numShaves
Number of shaves reserved.
method
property
outKeepAspectRatio
Whether to keep aspect ratio of the input or not
method
property
warpHwIds
Warp HW IDs to use, if empty, use auto/default
method
class

depthai.ImageManipConfig(depthai.Buffer)

class
ResizeMode
Members:    NONE    LETTERBOX    CENTER_CROP    STRETCH
method
method
method
method
addCropRotatedRect(self, rect: RotatedRect, normalizedCoords: bool) -> ImageManipConfig: ImageManipConfig
Crops the image to the specified (rotated) rectangle  Parameter ``rect``:     RotatedRect to crop  Parameter ``normalizedCoords``:     If true, the coordinates are normalized to range [0, 1] where 1 maps to the     width/height of the image
method
method
method
method
method
addTransformAffine(self, mat: typing.Annotated [ list [ float ] , pybind11_stubgen.typing_ext.FixedSize ( 4 ) ]) -> ImageManipConfig: ImageManipConfig
Applies an affine transformation to the image  Parameter ``matrix``:     an array containing a 2x2 matrix representing the affine transformation
method
addTransformFourPoints(self, src: typing.Annotated [ list [ Point2f ] , pybind11_stubgen.typing_ext.FixedSize ( 4 ) ], dst: typing.Annotated [ list [ Point2f ] , pybind11_stubgen.typing_ext.FixedSize ( 4 ) ], normalizedCoords: bool) -> ImageManipConfig: ImageManipConfig
Applies a perspective transformation to the image  Parameter ``src``:     Source points  Parameter ``dst``:     Destination points  Parameter ``normalizedCoords``:     If true, the coordinates are normalized to range [0, 1] where 1 maps to the     width/height of the image
method
addTransformPerspective(self, mat: typing.Annotated [ list [ float ] , pybind11_stubgen.typing_ext.FixedSize ( 9 ) ]) -> ImageManipConfig: ImageManipConfig
Applies a perspective transformation to the image  Parameter ``matrix``:     an array containing a 3x3 matrix representing the perspective transformation
method
clearOps(self) -> ImageManipConfig: ImageManipConfig
Removes all operations from the list (does not affect output configuration)
method
getUndistort(self) -> bool: bool
Gets the undistort flag  Returns:     True if undistort is enabled, false otherwise
method
method
method
setFrameType(self, type: ImgFrame.Type) -> ImageManipConfig: ImageManipConfig
Sets the frame type of the output image  Parameter ``frameType``:     Frame type of the output image
method
setOutputCenter(self, c: bool) -> ImageManipConfig: ImageManipConfig
Centers the content in the output image without resizing  Parameter ``c``:     True to center the content, false otherwise
method
setOutputSize(self, w: int, h: int, mode: ImageManipConfig.ResizeMode = ...) -> ImageManipConfig: ImageManipConfig
Sets the output size of the image  Parameter ``w``:     Width of the output image  Parameter ``h``:     Height of the output image  Parameter ``mode``:     Resize mode. NONE - no resize, STRETCH - stretch to fit, LETTERBOX - keep     aspect ratio and pad with background color, CENTER_CROP - keep aspect ratio     and crop
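Combining the output setters, a sketch (arbitrary example dimensions) that letterboxes to 640x360 and converts to interleaved BGR:

```python
import depthai as dai

cfg = dai.ImageManipConfig()
cfg.setOutputSize(640, 360, dai.ImageManipConfig.ResizeMode.LETTERBOX)
cfg.setFrameType(dai.ImgFrame.Type.BGR888i)
```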
method
setReusePreviousImage(self, reuse: bool) -> ImageManipConfig: ImageManipConfig
Instruct ImageManip to not remove current image from its queue and use the same for next message.  Parameter ``reuse``:     True to enable reuse, false otherwise
method
setSkipCurrentImage(self, skip: bool) -> ImageManipConfig: ImageManipConfig
Instructs ImageManip to skip current image and wait for next in queue.  Parameter ``skip``:     True to skip current image, false otherwise
method
class

depthai.ImgAnnotations(depthai.Buffer)

variable
method
method
getSequenceNum(self) -> int: int
Retrieves image sequence number
method
getTimestamp(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp related to dai::Clock::now()
method
getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
class

depthai.ImgDetection

class

depthai.ImgDetections(depthai.Buffer)

method
__init__(self)
Construct ImgDetections message.
method
method
getSequenceNum(self) -> int: int
Retrieves image sequence number
method
getTimestamp(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp related to dai::Clock::now()
method
getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
method
method
property
detections
Detections
method
class

depthai.ImgFrame(depthai.Buffer)

class
class
Type
Members:    YUV422i    YUV444p    YUV420p    YUV422p    YUV400p    RGBA8888    RGB161616    RGB888p    BGR888p    RGB888i    BGR888i    RGBF16F16F16p    BGRF16F16F16p    RGBF16F16F16i    BGRF16F16F16i    GRAY8    GRAYF16    LUT2    LUT4    LUT16    RAW16    RAW14    RAW12    RAW10    RAW8    PACK10    PACK12    YUV444i    NV12    NV21    BITSTREAM    HDR    RAW32    NONE
method
method
method
getBytesPerPixel(self) -> float: float
Retrieves image bytes per pixel
method
getCategory(self) -> int: int
Retrieves image category
method
getColorTemperature(self) -> int: int
Retrieves white-balance color temperature of the light source, in kelvins
method
getCvFrame(self) -> numpy.ndarray: numpy.ndarray
@note This API is only available if OpenCV support is enabled  Retrieves a cv::Mat suitable for use in common OpenCV functions. The ImgFrame is converted to color BGR interleaved or grayscale depending on type.  A copy is always made  Returns:     cv::Mat for use in OpenCV functions
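A round-trip sketch (requires an OpenCV-enabled build) pairing this getter with `setCvFrame` below; the frame contents are a synthetic test image:

```python
import numpy as np
import depthai as dai

frame = dai.ImgFrame()
bgr = np.zeros((360, 640, 3), dtype=np.uint8)     # H x W x C test image
frame.setCvFrame(bgr, dai.ImgFrame.Type.BGR888i)  # copy data in, set type
out = frame.getCvFrame()                          # always returns a copy
print(out.shape)
```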
method
method
getFrame(self) -> numpy.ndarray: numpy.ndarray
@note This API is only available if OpenCV support is enabled  Retrieves data as cv::Mat with the specified width, height and type  Parameter ``copy``:     If false only a reference to the data is made, otherwise a copy  Returns:     cv::Mat corresponding to ImgFrame parameters
method
getHeight(self) -> int: int
Retrieves image height in pixels
method
getInstanceNum(self) -> int: int
Retrieves instance number
method
getLensPosition(self) -> int: int
Retrieves lens position, range 0..255. Returns -1 if not available
method
getLensPositionRaw(self) -> float: float
Retrieves lens position, range 0.0f..1.0f. Returns -1 if not available
method
getPlaneHeight(self) -> int: int
Retrieves image plane height in lines
method
getPlaneStride(self, arg0: int) -> int: int
Retrieves image plane stride (offset to next plane) in bytes  Parameter ``current``:     plane index, 0 or 1
method
getSensitivity(self) -> int: int
Retrieves sensitivity, as an ISO value
method
getSequenceNum(self) -> int: int
Retrieves image sequence number
method
getSourceDFov(self) -> float: float
@note The FoV API works correctly only on rectilinear frames. Get the source diagonal field of view in degrees  Returns:     field of view in degrees
method
getSourceHFov(self) -> float: float
@note The FoV API works correctly only on rectilinear frames. Get the source horizontal field of view  Returns:     field of view in degrees
method
getSourceHeight(self) -> int: int
Retrieves source image height in pixels
method
getSourceVFov(self) -> float: float
@note The FoV API works correctly only on rectilinear frames. Get the source vertical field of view  Returns:     field of view in degrees
method
getSourceWidth(self) -> int: int
Retrieves source image width in pixels
method
getStride(self) -> int: int
Retrieves image line stride in bytes
method
method
method
method
method
getWidth(self) -> int: int
Retrieves image width in pixels
method
setCategory(self, category: int) -> ImgFrame: ImgFrame
Parameter ``category``:     Image category
method
setCvFrame(self, arg0: numpy.ndarray, arg1: ImgFrame.Type) -> ImgFrame: ImgFrame
@note This API is only available if OpenCV support is enabled  Copies cv::Mat data to the ImgFrame buffer and converts to a specific type.  Parameter ``frame``:     Input cv::Mat BGR frame from which to copy the data
method
setFrame(self, arg0: numpy.ndarray) -> ImgFrame: ImgFrame
@note This API is only available if OpenCV support is enabled  Copies cv::Mat data to the ImgFrame buffer  Parameter ``frame``:     Input cv::Mat frame from which to copy the data
method
setHeight(self, height: int) -> ImgFrame: ImgFrame
Specifies frame height  Parameter ``height``:     frame height
method
setInstanceNum(self, instance: int) -> ImgFrame: ImgFrame
Instance number relates to the origin of the frame (which camera)  Parameter ``instance``:     Instance number
method
method
setStride(self, stride: int) -> ImgFrame: ImgFrame
Specifies frame stride  Parameter ``stride``:     frame stride
method
method
setType(self, type: ImgFrame.Type) -> ImgFrame: ImgFrame
Specifies frame type, RGB, BGR, ...  Parameter ``type``:     Type of image
method
setWidth(self, width: int) -> ImgFrame: ImgFrame
Specifies frame width  Parameter ``width``:     frame width
method
validateTransformations(self) -> bool: bool
Check that the image transformations match the image size  Returns:     true if the transformations are valid
class

depthai.ImgFrame.Specs

class

depthai.ImgFrame.Type

variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
method
method
method
method
method
method
method
method
method
method
property
property
class

depthai.ImgTransformation

method
method
method
addCrop(self, x: int, y: int, width: int, height: int) -> ImgTransformation: ImgTransformation
Add a crop transformation.  Parameter ``x``:     X coordinate of the top-left corner of the crop  Parameter ``y``:     Y coordinate of the top-left corner of the crop  Parameter ``width``:     Width of the crop  Parameter ``height``:     Height of the crop
method
method
method
addPadding(self, x: int, y: int, width: int, height: int) -> ImgTransformation: ImgTransformation
Add a pad transformation. Works like crop, but in reverse.  Parameter ``top``:     Padding on the top  Parameter ``bottom``:     Padding on the bottom  Parameter ``left``:     Padding on the left  Parameter ``right``:     Padding on the right
method
addRotation(self, angle: float, rotationPoint: Point2f) -> ImgTransformation: ImgTransformation
Add a rotation transformation.  Parameter ``angle``:     Angle in degrees  Parameter ``rotationPoint``:     Point around which to rotate
method
addScale(self, scaleX: float, scaleY: float) -> ImgTransformation: ImgTransformation
Add a scale transformation.  Parameter ``scaleX``:     Scale factor in the horizontal direction  Parameter ``scaleY``:     Scale factor in the vertical direction
method
method
method
getDistortionCoefficients(self) -> list[float]: list[float]
Retrieve the distortion coefficients of the source sensor  Returns:     vector of distortion coefficients
method
getDistortionModel(self) -> CameraModel: CameraModel
Retrieve the distortion model of the source sensor  Returns:     Distortion model
method
getDstMaskPt(self, x: int, y: int) -> bool: bool
Returns true if the point is inside the image region (not in the background region).
method
method
method
method
method
method
getSize(self) -> tuple[int, int]: tuple[int, int]
Retrieve the size of the frame. Should be equal to the size of the corresponding ImgFrame message.  Returns:     Size of the frame
method
method
method
getSourceSize(self) -> tuple[int, int]: tuple[int, int]
Retrieve the size of the source frame from which this frame was derived.  Returns:     Size of the frame
method
method
getSrcMaskPt(self, x: int, y: int) -> bool: bool
Returns true if the point is inside the transformed region of interest (determined by crops used).
method
method
invTransformPoint(self, point: Point2f) -> Point2f: Point2f
Transform a point from the current frame to the source frame.  Parameter ``point``:     Point to transform  Returns:     Transformed point
method
invTransformRect(self, rect: RotatedRect) -> RotatedRect: RotatedRect
Transform a rotated rect from the current frame to the source frame.  Parameter ``rect``:     Rectangle to transform  Returns:     Transformed rectangle
method
isValid(self) -> bool: bool
Check if the transformations are valid. The transformations are valid if the source frame size and the current frame size are set.
method
remapPointFrom(self, to: ImgTransformation, point: Point2f) -> Point2f: Point2f
Remap a point to this transformation from another. If the intrinsics are different (e.g. different camera), the function will also use the intrinsics to remap the point.  Parameter ``from``:     Transformation to remap from  Parameter ``point``:     Point to remap
method
remapPointTo(self, to: ImgTransformation, point: Point2f) -> Point2f: Point2f
Remap a point from this transformation to another. If the intrinsics are different (e.g. different camera), the function will also use the intrinsics to remap the point.  Parameter ``to``:     Transformation to remap to  Parameter ``point``:     Point to remap
method
remapRectFrom(self, to: ImgTransformation, rect: RotatedRect) -> RotatedRect: RotatedRect
Remap a rotated rect to this transformation from another. If the intrinsics are different (e.g. different camera), the function will also use the intrinsics to remap the rect.  Parameter ``from``:     Transformation to remap from  Parameter ``rect``:     RotatedRect to remap
method
remapRectTo(self, to: ImgTransformation, rect: RotatedRect) -> RotatedRect: RotatedRect
Remap a rotated rect from this transformation to another. If the intrinsics are different (e.g. different camera), the function will also use the intrinsics to remap the rect.  Parameter ``to``:     Transformation to remap to  Parameter ``rect``:     RotatedRect to remap
method
method
method
method
transformPoint(self, point: Point2f) -> Point2f: Point2f
Transform a point from the source frame to the current frame.  Parameter ``point``:     Point to transform  Returns:     Transformed point
method
transformRect(self, rect: RotatedRect) -> RotatedRect: RotatedRect
Transform a rotated rect from the source frame to the current frame.  Parameter ``rect``:     Rectangle to transform  Returns:     Transformed rectangle
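Example: a sketch of composing transformations and mapping points between the source and transformed frames. The (width, height) constructor is an assumption; in practice the transformation usually arrives attached to an ImgFrame:

```python
import depthai as dai

tr = dai.ImgTransformation(1920, 1080)         # assumed ctor: source frame size
tr = tr.addCrop(320, 180, 1280, 720)           # crop a 1280x720 window
tr = tr.addScale(0.5, 0.5)                     # downscale to 640x360
p = tr.transformPoint(dai.Point2f(960, 540))   # source -> current frame
back = tr.invTransformPoint(p)                 # current -> source frame
print((p.x, p.y), (back.x, back.y))
```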
class

depthai.InputQueue

method
send(self, msg: ADatatype)
Send a message to the connected input  Parameter ``msg:``:     Message to send
class

depthai.LogMessage

class

depthai.MemoryInfo

class

depthai.MessageGroup(depthai.Buffer)

method
method
method
method
method
method
getIntervalNs(self) -> int: int
Retrieves interval between the first and the last message in the group.
method
getMessageNames(self) -> list[str]: list[str]
Gets the names of messages in the group
method
method
getSequenceNum(self) -> int: int
Retrieves image sequence number
method
getTimestamp(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp related to dai::Clock::now()
method
getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
method
isSynced(self, arg0: int) -> bool: bool
True if all messages in the group are in the interval  Parameter ``thresholdNs``:     Maximal interval between messages
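Example: a sketch of consuming a MessageGroup (e.g. produced by a Sync node) from an output queue. Indexing the group by message name is an assumption based on the collapsed methods above, and "color" is a hypothetical stream name:

```python
import depthai as dai

def handle_group(q: dai.MessageQueue) -> None:
    group = q.get()                   # expected to be a dai.MessageGroup
    print(group.getMessageNames())
    if group.isSynced(10_000_000):    # all messages within 10 ms of each other
        color = group["color"]        # assumed __getitem__ access by name
        print(type(color), group.getIntervalNs())
```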
class

depthai.MessageQueue

exception
method
method
addCallback(self, callback: typing.Callable) -> int: int
Adds a callback on message received  Parameter ``callback``:     Callback function with queue name and message pointer  Returns:     Callback id
method
close(self)
Closes the queue and unblocks any waiting consumers or producers
method
front(self) -> ADatatype: ADatatype
Gets first message in the queue.  Returns:     Message of type T or nullptr if no message available
method
method
getAll(self) -> list[ADatatype]: list[ADatatype]
Block until at least one message in the queue. Then return all messages from the queue.  Returns:     Vector of messages which can either be of type T or nullptr
method
getBlocking(self) -> bool: bool
Gets current queue behavior when full (maxSize)  Returns:     True if blocking, false otherwise
method
getMaxSize(self) -> int: int
Gets queue maximum size  Returns:     Maximum queue size
method
method
getSize(self) -> int: int
Gets queue current size  Returns:     Current queue size
method
has(self) -> bool: bool
Check whether front of the queue has message of type T  Returns:     True if queue isn't empty and the first element is of type T, false     otherwise
method
isClosed(self) -> bool: bool
Check whether queue is closed
method
isFull(self) -> int: int
Gets whether queue is full  Returns:     True if queue is full, false otherwise
method
removeCallback(self, callbackId: int) -> bool: bool
Removes a callback  Parameter ``callbackId``:     Id of callback to be removed  Returns:     True if callback was removed, false otherwise
method
method
setBlocking(self, blocking: bool)
Sets queue behavior when full (maxSize)  Parameter ``blocking``:     Specifies if block or overwrite the oldest message in the queue
method
setMaxSize(self, maxSize: int)
Sets queue maximum size  Parameter ``maxSize``:     Specifies maximum number of messages in the queue @note If maxSize is     smaller than size, queue will not be truncated immediately, only after     messages are popped
method
setName(self, name: str)
Set the name of the queue
method
tryGet(self) -> ADatatype: ADatatype
Try to retrieve message T from queue. If message isn't of type T it returns nullptr  Returns:     Message of type T or nullptr if no message available
method
tryGetAll(self) -> list[ADatatype]: list[ADatatype]
Try to retrieve all messages in the queue.  Returns:     Vector of messages which can either be of type T or nullptr
method
trySend(self, msg: ADatatype) -> bool: bool
Tries sending a message; unlike send, it does not block when the queue is full.  Parameter ``msg``:     Message to send  Returns:     True if the message was sent, false otherwise
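Example: a typical non-blocking consumption pattern using the methods above:

```python
import depthai as dai

def drain(q: dai.MessageQueue) -> None:
    q.setMaxSize(4)        # keep at most 4 messages in the queue
    q.setBlocking(False)   # overwrite the oldest message when full
    msg = q.tryGet()       # non-blocking; None when the queue is empty
    if msg is not None:
        print(type(msg))
    for m in q.getAll():   # blocks until at least one message arrives
        print(type(m))
```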
class

depthai.MonoCameraProperties

class
SensorResolution
Select the camera sensor resolution: 1280×720, 1280×800, 640×400, 640×480, 1920×1200, ...  Members:    THE_720_P    THE_800_P    THE_400_P    THE_480_P    THE_1200_P    THE_4000X3000    THE_4224X3136
variable
variable
variable
variable
variable
variable
variable
variable
class

depthai.NNArchive

method
method
getBlob(self) -> OpenVINO.Blob|None: OpenVINO.Blob|None
Return an OpenVINO::Blob from the archive if getModelType() returns BLOB, nothing otherwise  Returns:     std::optional<OpenVINO::Blob>: Model blob
method
getConfig(self) -> nn_archive.v1.Config: nn_archive.v1.Config
Get NNArchive config.  Template parameter ``T:``:     Type of config to get  Returns:     const T&: Config
method
getConfigV1(self) -> nn_archive.v1.Config: nn_archive.v1.Config
Get NNArchive config.  Template parameter ``T:``:     Type of config to get  Returns:     const T&: Config
method
getInputHeight(self, index: int = 0) -> int|None: int|None
Get inputHeight of the model  Parameter ``index:``:     Index of input  Returns:     int: inputHeight
method
getInputSize(self, index: int = 0) -> tuple[int, int]|None: tuple[int, int]|None
Get inputSize of the model  Parameter ``index:``:     Index of input  @note this function is only valid for models with NCHW and NHWC input formats  Returns:     std::pair<int, int>: inputSize (width, height)
method
getInputWidth(self, index: int = 0) -> int|None: int|None
Get inputWidth of the model  Parameter ``index:``:     Index of input  Returns:     int: inputWidth
method
getModelPath(self) -> str|None: str|None
Return a path to the model inside the archive if getModelType() returns OTHER or DLC, nothing otherwise  Returns:     std::optional<std::string>: Model path
method
getModelType(self) -> ModelType: ModelType
Get type of model contained in NNArchive  Returns:     model::ModelType: type of model in archive
method
getSuperBlob(self) -> OpenVINO.SuperBlob|None: OpenVINO.SuperBlob|None
Return an OpenVINO::SuperBlob from the archive if getModelType() returns SUPERBLOB, nothing otherwise  Returns:     std::optional<OpenVINO::SuperBlob>: Model superblob
method
getSupportedPlatforms(self) -> list[Platform]: list[Platform]
Get supported platforms  Returns:     std::vector<dai::Platform>: Supported platforms
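Example: a sketch of inspecting an archive; "model.tar.xz" is a placeholder path and the path constructor follows the collapsed methods above:

```python
import depthai as dai

archive = dai.NNArchive("model.tar.xz")   # placeholder archive path
print(archive.getModelType())             # e.g. BLOB, DLC, OTHER
print(archive.getInputSize())             # (width, height) or None
print(archive.getSupportedPlatforms())
```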
class

depthai.NNArchiveEntry

class
Compression
Members:    AUTO    RAW_FS    TAR    TAR_GZ    TAR_XZ
class
Seek
Members:    SET    CUR    END
class

depthai.NNArchiveOptions

class

depthai.NNArchiveVersionedConfig

method
method
getConfig(self) -> nn_archive.v1.Config: nn_archive.v1.Config
Get stored config cast to a specific version.  Template parameter ``T:``:     Config type to cast to.
method
getConfigV1(self) -> nn_archive.v1.Config: nn_archive.v1.Config
Get stored config cast to a specific version.  Template parameter ``T:``:     Config type to cast to.
method
class

depthai.NNData(depthai.Buffer)

method
method
method
method
getAllLayerNames(self) -> list[str]: list[str]
Returns:     Names of all layers added
method
getAllLayers(self) -> list[TensorInfo]: list[TensorInfo]
Returns:     All layers and their information
method
method
getLayerDatatype(self, name: str, datatype: TensorInfo.DataType) -> bool: bool
Retrieve the datatype of a layer's tensor  Parameter ``name``:     Name of the layer  Parameter ``datatype``:     Datatype of the layer's tensor  Returns:     True if layer exists, false otherwise
method
getSequenceNum(self) -> int: int
Retrieves image sequence number
method
method
getTensorDatatype(self, name: str) -> TensorInfo.DataType: TensorInfo.DataType
Get the datatype of a given tensor  Returns:     TensorInfo::DataType tensor datatype
method
getTensorInfo(self, name: str) -> TensorInfo|None: TensorInfo|None
Retrieve tensor information  Parameter ``name``:     Name of the tensor  Returns:     Tensor information
method
getTimestamp(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp related to dai::Clock::now()
method
getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
method
method
hasLayer(self, name: str) -> bool: bool
Checks if given layer exists  Parameter ``name``:     Name of the layer  Returns:     True if layer exists, false otherwise
method
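Example: a sketch of enumerating the output tensors of an NNData message. getTensor is one of the collapsed methods above and is assumed to return a numpy array:

```python
import depthai as dai

def print_outputs(nn_data: dai.NNData) -> None:
    for name in nn_data.getAllLayerNames():
        print(name, nn_data.getTensorDatatype(name))
        tensor = nn_data.getTensor(name)   # assumed to return a numpy array
        print(tensor.shape)
```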
class

depthai.NNModelDescription

static method
NNModelDescription.fromYamlFile(yamlPath: str, modelsPath: str = '') -> NNModelDescription: NNModelDescription
Initialize NNModelDescription from a yaml file. If yamlPath is a relative path (e.g. ./yolo.yaml) or an absolute path (e.g. /home/user/models/yolo.yaml), it is used as is. If yamlPath is a model name (e.g. yolo) or a model yaml file name (e.g. yolo.yaml), the function combines it with modelsPath (or, if modelsPath is not provided, the DEPTHAI_ZOO_MODELS_PATH environment variable) to form the path to the yaml file. For instance, yolo -> ./depthai_models/yolo.yaml (if modelsPath or DEPTHAI_ZOO_MODELS_PATH is ./depthai_models)  Parameter ``yamlPath:``:     model name or yaml file path  Parameter ``modelsPath:``:     Path to the models folder; the DEPTHAI_ZOO_MODELS_PATH environment variable is used if not provided  Returns:     NNModelDescription
method
method
__str__(self) -> str: str
Convert NNModelDescription to string for printing purposes. This can be used for debugging.  Returns:     std::string: String representation
method
check(self) -> bool: bool
Check if the model description is valid (contains all required fields)  Returns:     bool: True if the model description is valid, false otherwise
method
saveToYamlFile(self, yamlPath: str)
Save NNModelDescription to yaml file  Parameter ``yamlPath:``:     Path to yaml file
method
toString(self) -> str: str
Convert NNModelDescription to string for printing purposes. This can be used for debugging.  Returns:     std::string: String representation
property
compressionLevel
Compression level = OPTIONAL parameter
method
property
model
Model slug = REQUIRED parameter
method
property
modelPrecisionType
modelPrecisionType = OPTIONAL parameter
method
property
optimizationLevel
Optimization level = OPTIONAL parameter
method
property
platform
Hardware platform - RVC2, RVC3, RVC4, ... = REQUIRED parameter
method
property
snpeVersion
SNPE version = OPTIONAL parameter
method
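Example: a sketch of round-tripping a model description through yaml. The keyword-argument constructor is an assumption, and "yolov6-nano" is a placeholder model slug:

```python
import depthai as dai

desc = dai.NNModelDescription(model="yolov6-nano", platform="RVC2")
if desc.check():                            # model and platform are required
    desc.saveToYamlFile("yolov6-nano.yaml")
loaded = dai.NNModelDescription.fromYamlFile("yolov6-nano.yaml")
print(loaded)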
class

depthai.NeuralNetworkProperties

class

depthai.Node

class
Connection
Connection between an Input and Output
class
class
Id
Node identifier. Unique for every node on a single Pipeline
class
class
class
class
method
add(self, node: Node)
Add existing node to nodeMap
method
method
method
method
method
getName(self) -> str: str
Retrieves nodes name
method
method
method
method
method
property
id
Id of node. Assigned after being placed on the pipeline
class

depthai.Node.Connection

class

depthai.Node.DatatypeHierarchy

class

depthai.Node.Input(depthai.MessageQueue)

class
Type
Members:    SReceiver    MReceiver
variable
method
method
createInputQueue(self, maxSize: int = 16, blocking: bool = False) -> InputQueue: InputQueue
Create a shared pointer to an input queue that can be used to send messages to this input from the host  Parameter ``maxSize:``:     Maximum size of the input queue  Parameter ``blocking:``:     Whether the input queue should block when full  Returns:     std::shared_ptr<InputQueue>: shared pointer to an input queue
method
method
method
getReusePreviousMessage(self) -> bool: bool
Equivalent to getWaitForMessage but with inverted logic.
method
getWaitForMessage(self) -> bool: bool
Get behavior whether to wait for this input when a Node processes certain data or not  Returns:     Whether to wait for message to arrive to this input or not
method
method
setReusePreviousMessage(self, reusePreviousMessage: bool)
Equivalent to setWaitForMessage but with inverted logic.
method
setWaitForMessage(self, waitForMessage: bool)
Overrides default wait for message behavior. Applicable for nodes with multiple inputs. Specifies behavior whether to wait for this input when a Node processes certain data or not.  Parameter ``waitForMessage``:     Whether to wait for message to arrive to this input or not
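Example: a sketch of feeding any node input from the host via the queue returned by createInputQueue:

```python
import depthai as dai

def host_feed(inp: dai.Node.Input, msg: dai.ADatatype) -> None:
    # Non-blocking host-side queue; oldest message is overwritten when full.
    q = inp.createInputQueue(maxSize=8, blocking=False)
    q.send(msg)   # message is delivered to the node's input
```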
class

depthai.Node.Output

class
Type
Members:    MSender    SSender
method
method
canConnect(self, input: Node.Input) -> bool: bool
Check if connection is possible  Parameter ``in``:     Input to connect to  Returns:     True if connection is possible, false otherwise
method
createOutputQueue(self, maxSize: int = 16, blocking: bool = False) -> MessageQueue: MessageQueue
Construct and return a shared pointer to an output message queue  Parameter ``maxSize:``:     Maximum size of the output queue  Parameter ``blocking:``:     Whether the output queue should block when full  Returns:     std::shared_ptr<dai::MessageQueue>: shared pointer to an output queue
method
method
method
isSamePipeline(self, input: Node.Input) -> bool: bool
Check if this output and given input are on the same pipeline.  See also:     canConnect for checking if connection is possible  Returns:     True if output and input are on the same pipeline
method
method
send(self, msg: ADatatype)
Sends a Message to all connected inputs  Parameter ``msg``:     Message to send to all connected inputs
method
method
trySend(self, msg: ADatatype) -> bool: bool
Try sending a message to all connected inputs  Parameter ``msg``:     Message to send to all connected inputs  Returns:     True if ALL connected inputs got the message, false otherwise
method
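Example: a sketch of tapping any node output on the host:

```python
import depthai as dai

def host_tap(out: dai.Node.Output) -> dai.MessageQueue:
    # Non-blocking queue keeping only the 4 most recent messages.
    return out.createOutputQueue(maxSize=4, blocking=False)
```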
class

depthai.ObjectTrackerProperties

property
detectionLabelsToTrack
Which detection labels to track. By default, all labels are tracked.
method
property
maxObjectsToTrack
Maximum number of objects to track. Maximum 60 for SHORT_TERM_KCF, maximum 1000 for other tracking methods. Default 60.
method
property
trackerIdAssignmentPolicy
New ID assignment policy.
method
property
trackerThreshold
Confidence threshold for tracklets. Above this threshold detections will be tracked. Default 0, all detections are tracked.
method
property
trackerType
Tracking method.
method
class

depthai.OpenVINO

class
Blob
OpenVINO Blob
class
Device
Members:    VPU    VPUX
class
SuperBlob
A superblob is an efficient way of storing generated blobs for every possible number of shaves.
class
Version
OpenVINO Version supported version information  Members:    VERSION_2020_3    VERSION_2020_4    VERSION_2021_1    VERSION_2021_2    VERSION_2021_3    VERSION_2021_4    VERSION_2022_1    VERSION_UNIVERSAL
variable
variable
variable
variable
variable
variable
variable
variable
variable
static method
static method
OpenVINO.getBlobLatestSupportedVersion(majorVersion: int, minorVersion: int) -> OpenVINO.Version: OpenVINO.Version
Returns the latest version potentially supported by a given blob version.  Parameter ``majorVersion``:     Major version from OpenVINO blob  Parameter ``minorVersion``:     Minor version from OpenVINO blob  Returns:     Latest potentially supported version
static method
OpenVINO.getBlobSupportedVersions(majorVersion: int, minorVersion: int) -> list[OpenVINO.Version]: list[OpenVINO.Version]
Returns a list of potentially supported versions for a specified blob major and minor versions.  Parameter ``majorVersion``:     Major version from OpenVINO blob  Parameter ``minorVersion``:     Minor version from OpenVINO blob  Returns:     Vector of potentially supported versions
static method
OpenVINO.getVersionName(version: OpenVINO.Version) -> str: str
Returns string representation of a given version  Parameter ``version``:     OpenVINO version  Returns:     Name of a given version
static method
static method
OpenVINO.parseVersionName(versionString: str) -> OpenVINO.Version: OpenVINO.Version
Creates Version from string representation. Throws if not possible.  Parameter ``versionString``:     Version as string  Returns:     Version object if successful
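Example, assuming "2022.1" is the string form produced by getVersionName:

```python
import depthai as dai

ver = dai.OpenVINO.parseVersionName("2022.1")   # throws if not parseable
print(dai.OpenVINO.getVersionName(ver))
```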
class

depthai.OpenVINO.Blob

method
property
data
Blob data
method
property
device
Device for which the blob is compiled for
method
property
networkInputs
Map of input names to additional information
method
property
networkOutputs
Map of output names to additional information
method
property
numShaves
Number of shaves the blob was compiled for
method
property
numSlices
Number of CMX slices the blob was compiled for
method
property
stageCount
Number of network stages
method
property
version
OpenVINO version
method
class

depthai.OpenVINO.SuperBlob

constant
method
method
getBlobWithNumShaves(self, numShaves: int) -> OpenVINO.Blob: OpenVINO.Blob
Generate a blob with a specific number of shaves  Parameter ``numShaves:``:     Number of shaves to generate the blob for. Must be between 1 and     NUMBER_OF_PATCHES.  Returns:     dai::OpenVINO::Blob: Blob compiled for the specified number of shaves
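Example: a sketch of generating a blob for a specific shave count; the path constructor is an assumption and "model.superblob" is a placeholder:

```python
import depthai as dai

sblob = dai.OpenVINO.SuperBlob("model.superblob")   # assumed path ctor
blob = sblob.getBlobWithNumShaves(6)                # blob patched for 6 shaves
print(blob.numShaves, blob.numSlices, blob.version)
```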
class

depthai.Pipeline

method
method
method
method
method
method
method
method
method
method
method
method
getCalibrationData(self) -> CalibrationHandler: CalibrationHandler
Gets the calibration data which is set through the pipeline  Returns:     the CalibrationHandler with calibration data in the pipeline
method
method
getDeviceConfig(self) -> Device.Config: Device.Config
Get device configuration needed for this pipeline
method
getGlobalProperties(self) -> GlobalProperties: GlobalProperties
Returns:     Global properties of current pipeline
method
getNode(self, arg0: int) -> Node: Node
Get node with id if it exists, nullptr otherwise
method
method
method
method
remove(self, node: Node)
Removes a node from pipeline
method
method
serializeToJson(self, arg0: bool) -> json: json
Returns whole pipeline represented as JSON
method
method
setCalibrationData(self, calibrationDataHandler: CalibrationHandler)
Sets the calibration in pipeline which overrides the calibration data in eeprom  Parameter ``calibrationDataHandler``:     CalibrationHandler object which is loaded with calibration information.
method
setCameraTuningBlobPath(self, path: Path)
Set a camera IQ (Image Quality) tuning blob, used for all cameras
method
setSippBufferSize(self, sizeBytes: int)
SIPP (Signal Image Processing Pipeline) internal memory pool. SIPP is a framework used to schedule HW filters, e.g. ISP, Warp, Median filter etc. Changing the size of this pool is meant for advanced use cases, pushing the limits of the HW. By default memory is allocated in high speed CMX memory. Setting it to 0 will allocate 256 kilobytes in DDR. Units are bytes.
method
setSippDmaBufferSize(self, sizeBytes: int)
SIPP (Signal Image Processing Pipeline) internal DMA memory pool. SIPP is a framework used to schedule HW filters, e.g. ISP, Warp, Median filter etc. Changing the size of this pool is meant for advanced use cases, pushing the limits of the HW. Memory is allocated in high speed CMX memory. Units are bytes.
method
setXLinkChunkSize(self, sizeBytes: int)
Set chunk size for splitting device-sent XLink packets, in bytes. A larger value could increase performance, with 0 disabling chunking. A negative value won't modify the device defaults - configured per protocol, currently 64*1024 for both USB and Ethernet.
method
method
method
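Example: a minimal sketch touching only the methods documented above. Note that, depending on the constructor overload used, creating a Pipeline may implicitly connect to an attached device:

```python
import depthai as dai

pipeline = dai.Pipeline()
pipeline.setXLinkChunkSize(0)           # disable chunking of XLink packets
print(pipeline.getGlobalProperties())
print(pipeline.serializeToJson(True))   # whole pipeline represented as JSON
```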
class

depthai.Point2f

class

depthai.Point3d

variable
variable
variable
method
class

depthai.Point3f

variable
variable
variable
method
class

depthai.Point3fRGBA

variable
variable
variable
variable
variable
variable
variable
method
class

depthai.PointCloudConfig(depthai.Buffer)

method
method
method
getSparse(self) -> bool: bool
Retrieve sparse point cloud calculation status.  Returns:     true if sparse point cloud calculation is enabled, false otherwise
method
method
setSparse(self, arg0: bool) -> PointCloudConfig: PointCloudConfig
Enable or disable sparse point cloud calculation.  Parameter ``enable``:     True to enable sparse point cloud calculation, false to disable
method
class

depthai.PointCloudData(depthai.Buffer)

method
method
method
getHeight(self) -> int: int
Retrieves the height in pixels - in case of a sparse point cloud, this represents the height of the frame which was used to generate the point cloud
method
getInstanceNum(self) -> int: int
Retrieves instance number
method
getMaxX(self) -> float: float
Retrieves maximal x coordinate in depth units (millimeter by default)
method
getMaxY(self) -> float: float
Retrieves maximal y coordinate in depth units (millimeter by default)
method
getMaxZ(self) -> float: float
Retrieves maximal z coordinate in depth units (millimeter by default)
method
getMinX(self) -> float: float
Retrieves minimal x coordinate in depth units (millimeter by default)
method
getMinY(self) -> float: float
Retrieves minimal y coordinate in depth units (millimeter by default)
method
getMinZ(self) -> float: float
Retrieves minimal z coordinate in depth units (millimeter by default)
method
method
method
getSequenceNum(self) -> int: int
Retrieves image sequence number
method
getTimestamp(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp related to dai::Clock::now()
method
getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
method
getWidth(self) -> int: int
Retrieves the width in pixels - in case of a sparse point cloud, this represents the width of the frame which was used to generate the point cloud
method
isColor(self) -> bool: bool
Retrieves whether point cloud is color
method
isSparse(self) -> bool: bool
Retrieves whether point cloud is sparse
method
setHeight(self, arg0: int) -> PointCloudData: PointCloudData
Specifies frame height  Parameter ``height``:     frame height
method
setInstanceNum(self, arg0: int) -> PointCloudData: PointCloudData
Specifies instance number  Parameter ``instanceNum``:     instance number
method
setMaxX(self, arg0: float) -> PointCloudData: PointCloudData
Specifies maximal x coordinate in depth units (millimeter by default)  Parameter ``val``:     maximal x coordinate in depth units (millimeter by default)
method
setMaxY(self, arg0: float) -> PointCloudData: PointCloudData
Specifies maximal y coordinate in depth units (millimeter by default)  Parameter ``val``:     maximal y coordinate in depth units (millimeter by default)
method
setMaxZ(self, arg0: float) -> PointCloudData: PointCloudData
Specifies maximal z coordinate in depth units (millimeter by default)  Parameter ``val``:     maximal z coordinate in depth units (millimeter by default)
method
setMinX(self, arg0: float) -> PointCloudData: PointCloudData
Specifies minimal x coordinate in depth units (millimeter by default)  Parameter ``val``:     minimal x coordinate in depth units (millimeter by default)
method
setMinY(self, arg0: float) -> PointCloudData: PointCloudData
Specifies minimal y coordinate in depth units (millimeter by default)  Parameter ``val``:     minimal y coordinate in depth units (millimeter by default)
method
setMinZ(self, arg0: float) -> PointCloudData: PointCloudData
Specifies minimal z coordinate in depth units (millimeter by default)  Parameter ``val``:     minimal z coordinate in depth units (millimeter by default)
method
method
setWidth(self, arg0: int) -> PointCloudData: PointCloudData
Specifies frame width  Parameter ``width``:     frame width
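Example: a sketch of reading a point cloud message. getPoints is assumed, from the collapsed methods above, to return an Nx3 numpy array:

```python
import depthai as dai

def inspect(pcl: dai.PointCloudData) -> None:
    print(pcl.getWidth(), pcl.getHeight(), pcl.isSparse(), pcl.isColor())
    print("x range:", pcl.getMinX(), "to", pcl.getMaxX())
    pts = pcl.getPoints()    # assumed helper returning an Nx3 numpy array
    print(pts.shape)
```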
class

depthai.PointCloudProperties

class

depthai.ProfilingData

class

depthai.Quaterniond

variable
variable
variable
variable
method
class

depthai.RecordConfig.VideoEncoding

class

depthai.Rect

variable
variable
variable
variable
method
method
area(self) -> float: float
Area (width*height) of the rectangle
method
method
contains(self, arg0: Point2f) -> bool: bool
Checks whether the rectangle contains the point.
method
denormalize(self, width: int, height: int) -> Rect: Rect
Denormalize rectangle.  Parameter ``width``:     Destination frame width.  Parameter ``height``:     Destination frame height.
method
empty(self) -> bool: bool
True if rectangle is empty.
method
isNormalized(self) -> bool: bool
Whether rectangle is normalized (coordinates in [0,1] range) or not.
method
normalize(self, width: int, height: int) -> Rect: Rect
Normalize rectangle.  Parameter ``width``:     Source frame width.  Parameter ``height``:     Source frame height.
method
size(self) -> Size2f: Size2f
Size (width, height) of the rectangle
method
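Example, using the (x, y, width, height) fields listed above:

```python
import depthai as dai

roi = dai.Rect(0.25, 0.25, 0.5, 0.5)   # normalized ROI
print(roi.isNormalized())              # True: all coordinates in [0, 1]
px = roi.denormalize(640, 400)         # pixel coordinates for a 640x400 frame
print(px.x, px.y, px.width, px.height, px.area())
```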
class

depthai.RemoteConnection

method
__init__(self, address: str = '0.0.0.0', webSocketPort: int = 8765, serveFrontend: bool = True, httpPort: int = 8080)
Constructs a RemoteConnection instance.  Parameter ``address``:     The address to bind the connection to.  Parameter ``webSocketPort``:     The port for WebSocket communication.  Parameter ``serveFrontend``:     Whether to serve a frontend UI.  Parameter ``httpPort``:     The port for HTTP communication.
method
method
registerPipeline(self, pipeline: Pipeline)
Registers a pipeline with the remote connection.  Parameter ``pipeline``:     The pipeline to register.
method
registerService(self, serviceName: str, callback: typing.Callable [ [ json ] , json ])
Registers a service with a callback function.  Parameter ``serviceName``:     The name of the service.  Parameter ``callback``:     The callback function to handle requests.
method
removeTopic(self, topicName: str) -> bool: bool
Removes a topic from the remote connection.  Parameter ``topicName``:     The name of the topic to remove. @note After removing a topic, any messages sent to it will cause an exception to be raised on the sender, since this closes the queue.  Returns:     True if the topic was successfully removed, false otherwise.
method
waitKey(self, delay: int) -> int: int
Waits for a key event.  Parameter ``delay``:     The delay in milliseconds to wait for a key press.  Returns:     The key code of the pressed key.
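Example: a sketch of serving a pipeline to the browser-based frontend. pipeline.start() is one of the collapsed Pipeline methods above, and the "ping" service is hypothetical:

```python
import depthai as dai

remote = dai.RemoteConnection()            # frontend on :8080, WebSocket on :8765
pipeline = dai.Pipeline()
# ... create nodes and add topics here ...
remote.registerPipeline(pipeline)
remote.registerService("ping", lambda req: {"ok": True})
pipeline.start()
while remote.waitKey(1) != ord("q"):       # run until 'q' is pressed in the UI
    pass
```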
class

depthai.RotatedRect

variable
variable
variable
method
method
denormalize(self, width: int, height: int) -> RotatedRect: RotatedRect
Denormalize the rotated rectangle. The denormalized rectangle will have center and size coordinates in range [0, width] and [0, height]  Returns:     Denormalized rotated rectangle
method
method
method
method
normalize(self, width: int, height: int) -> RotatedRect: RotatedRect
Normalize the rotated rectangle. The normalized rectangle will have center and size coordinates in range [0,1]  Returns:     Normalized rotated rectangle
class

depthai.SPIInProperties

class

depthai.SPIOutProperties

class

depthai.ScriptProperties

property
processor
Which processor should execute the script
method
property
scriptName
Name of script
method
property
scriptUri
Uri which points to actual script
method
class

depthai.Size2f

class

depthai.SpatialDetectionNetworkProperties

class

depthai.SpatialImgDetection(depthai.ImgDetection)

class

depthai.SpatialImgDetections(depthai.Buffer)

variable
method
method
method
getSequenceNum(self) -> int: int
Retrieves image sequence number
method
getTimestamp(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp related to dai::Clock::now()
method
getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
method
method
class

depthai.SpatialLocationCalculatorConfig(depthai.Buffer)

method
method
method
addROI(self, ROI: SpatialLocationCalculatorConfigData)
Add a new ROI to configuration data.  Parameter ``roi``:     Configuration parameters for ROI (region of interest)
method
getConfigData(self) -> list[SpatialLocationCalculatorConfigData]: list[SpatialLocationCalculatorConfigData]
Retrieve configuration data for SpatialLocationCalculator  Returns:     Vector of configuration parameters for ROIs (region of interests)
method
setROIs(self, ROIs: list [ SpatialLocationCalculatorConfigData ])
Set a vector of ROIs as configuration data.  Parameter ``ROIs``:     Vector of configuration parameters for ROIs (region of interests)
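Example: configuring a single ROI with depth thresholds (fields follow SpatialLocationCalculatorConfigData, documented below):

```python
import depthai as dai

data = dai.SpatialLocationCalculatorConfigData()
data.roi = dai.Rect(0.4, 0.4, 0.2, 0.2)       # normalized ROI
data.depthThresholds.lowerThreshold = 100     # depth units (mm by default)
data.depthThresholds.upperThreshold = 10000
cfg = dai.SpatialLocationCalculatorConfig()
cfg.addROI(data)
print(len(cfg.getConfigData()))               # -> 1
```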
class

depthai.SpatialLocationCalculatorConfigData

method
property
calculationAlgorithm
Calculation method used to obtain spatial locations. Average/mean: the average of the ROI is used for calculation. Min: the minimum value inside the ROI is used for calculation. Max: the maximum value inside the ROI is used for calculation. Mode: the most frequent value inside the ROI is used for calculation. Median: the median value inside the ROI is used for calculation. Default: median.
method
property
depthThresholds
Upper and lower thresholds for depth values to take into consideration.
method
property
roi
Region of interest for spatial location calculation.
method
class

depthai.SpatialLocationCalculatorConfigThresholds

class

depthai.SpatialLocationCalculatorData(depthai.Buffer)

variable
method
method
method
getSequenceNum(self) -> int: int
Retrieves image sequence number
method
getSpatialLocations(self) -> list[SpatialLocations]: list[SpatialLocations]
Retrieve configuration data for SpatialLocationCalculatorData.  Returns:     Vector of spatial location data, carrying spatial information (X,Y,Z)
method
getTimestamp(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp related to dai::Clock::now()
method
getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
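Example: a sketch of reading the computed locations (fields follow SpatialLocations, documented below):

```python
import depthai as dai

def print_locations(data: dai.SpatialLocationCalculatorData) -> None:
    for loc in data.getSpatialLocations():
        c = loc.spatialCoordinates            # x, y, z in depth units
        print(f"({c.x:.0f}, {c.y:.0f}, {c.z:.0f})")
```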
class

depthai.SpatialLocationCalculatorProperties

class

depthai.SpatialLocations

method
property
config
Configuration for selected ROI
method
property
depthAverage
Average of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to AVERAGE, MIN or MAX.
method
property
depthAveragePixelCount
Number of depth values used in calculations.
method
property
depthMax
Maximum of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to AVERAGE, MIN or MAX.
method
property
depthMedian
Median of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to MEDIAN.
method
property
depthMin
Minimum of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to AVERAGE, MIN or MAX.
method
property
depthMode
Most frequent of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to MODE.
method
property
spatialCoordinates
Spatial coordinates - x,y,z; x,y are the relative positions of the center of ROI to the center of depth map
method
class

depthai.StereoDepthConfig(depthai.Buffer)

class
class
CensusTransform
The basic cost function used by the Stereo Accelerator for matching the left and right images is the Census Transform. It works on a block of pixels and computes a bit vector which represents the structure of the image in that block. There are two types of Census Transform based on how the middle pixel is used: Classic Approach and Modified Census. The comparisons made between pixels may or may not be thresholded, and in some cases a mask can be applied to filter out only specific bits from the entire bit stream. All these approaches are: Classic Approach: uses the middle pixel to compare against all its neighbors over a defined window. Each comparison results in a new bit, that is 0 if the central pixel is smaller, or 1 if it is bigger than its neighbor. Modified Census Transform: same as the classic Census Transform, but instead of comparing the central pixel with its neighbors, the window mean is compared with each pixel over the window. Thresholding Census Transform: same as the classic Census Transform, but it is not enough for a neighbor pixel to be bigger than the central pixel, it must be significantly bigger (based on a threshold). Census Transform with Mask: same as the classic Census Transform, but in this case not all of the pixels from the support window are part of the binary descriptor. A mask "M" defines which pixels are part of the binary descriptor (1), and which pixels should be skipped (0).
class
class
CostAggregation
Cost Aggregation is based on Semi Global Block Matching (SGBM). This algorithm uses a semi global technique to aggregate the cost map. Ultimately the idea is to build inertia into the stereo algorithm. If a pixel has very little texture information, then odds are the correct disparity for this pixel is close to that of the previous pixel considered. This means that we get improved results in areas with low texture.
class
CostMatching
The matching cost is a way of measuring the similarity of image locations in a stereo correspondence algorithm. Based on the configuration parameters and the descriptor type, a linear equation is applied to compute the cost for each candidate disparity at each pixel.
class
MedianFilter
Median filter config  Members:    MEDIAN_OFF    KERNEL_3x3    KERNEL_5x5    KERNEL_7x7
class
PostProcessing
Post-processing filters, all the filters are applied in disparity domain.
method
method
method
getBilateralFilterSigma(self) -> int: int
Get sigma value for 5x5 bilateral filter
method
getConfidenceThreshold(self) -> int: int
Get confidence threshold for disparity calculation
method
method
getExtendedDisparity(self) -> bool: bool
Get extended disparity setting
method
method
getLeftRightCheck(self) -> bool: bool
Get left-right check setting
method
getLeftRightCheckThreshold(self) -> int: int
Get threshold for left-right check combine
method
getMaxDisparity(self) -> float: float
Useful for normalization of the disparity map.  Returns:     Maximum disparity value that the node can return
method
method
method
getSubpixelFractionalBits(self) -> int: int
Get number of fractional bits for subpixel mode
method
setBilateralFilterSigma(self, sigma: int) -> StereoDepthConfig: StereoDepthConfig
A larger value of the parameter means that farther colors within the pixel neighborhood will be mixed together, resulting in larger areas of semi-equal color.  Parameter ``sigma``:     Set sigma value for 5x5 bilateral filter. 0..65535
method
setConfidenceThreshold(self, confThr: int) -> StereoDepthConfig: StereoDepthConfig
Confidence threshold for disparity calculation  Parameter ``confThr``:     Confidence threshold value 0..255
method
setDepthAlign(self, align: StereoDepthConfig.AlgorithmControl.DepthAlign) -> StereoDepthConfig: StereoDepthConfig
Parameter ``align``:     Set the disparity/depth alignment: centered (between the 'left' and 'right'     inputs), or from the perspective of a rectified output stream
method
setDepthUnit(self, arg0: StereoDepthConfig.AlgorithmControl.DepthUnit) -> StereoDepthConfig: StereoDepthConfig
Set depth unit of depth map.  Meter, centimeter, millimeter, inch, foot or custom unit is available.
method
setDisparityShift(self, arg0: int) -> StereoDepthConfig: StereoDepthConfig
Shift input frame by a number of pixels to increase minimum depth. For example shifting by 48 will change effective disparity search range from (0,95] to [48,143]. An alternative approach to reducing the minZ. We normally only recommend doing this when it is known that there will be no objects farther away than MaxZ, such as having a depth camera mounted above a table pointing down at the table surface.
method
setExtendedDisparity(self, enable: bool) -> StereoDepthConfig: StereoDepthConfig
Disparity range increased from 95 to 190, combined from full resolution and downscaled images. Suitable for short range objects
method
method
setLeftRightCheck(self, enable: bool) -> StereoDepthConfig: StereoDepthConfig
Computes disparities in both L-R and R-L directions and combines them, for better occlusion handling, discarding invalid disparity values
method
setLeftRightCheckThreshold(self, sigma: int) -> StereoDepthConfig: StereoDepthConfig
Parameter ``threshold``:     Set threshold for left-right, right-left disparity map combine, 0..255
method
setMedianFilter(self, median: MedianFilter) -> StereoDepthConfig: StereoDepthConfig
Parameter ``median``:     Set kernel size for disparity/depth median filtering, or disable
method
setNumInvalidateEdgePixels(self, arg0: int) -> StereoDepthConfig: StereoDepthConfig
Invalidate X amount of pixels at the edge of disparity frame. For right and center alignment X pixels will be invalidated from the right edge, for left alignment from the left edge.
method
setSubpixel(self, enable: bool) -> StereoDepthConfig: StereoDepthConfig
Computes disparity with sub-pixel interpolation (3 fractional bits by default).  Suitable for long range. Currently incompatible with extended disparity
method
setSubpixelFractionalBits(self, subpixelFractionalBits: int) -> StereoDepthConfig: StereoDepthConfig
Number of fractional bits for subpixel mode. Default value: 3. Valid values: 3,4,5. Defines the number of fractional disparities: 2^x. Median filter postprocessing is supported only for 3 fractional bits.
property
algorithmControl
Controls the flow of stereo algorithm - left-right check, subpixel etc.
method
property
censusTransform
Census transform settings.
method
property
confidenceMetrics
Confidence metrics settings.
method
property
costAggregation
Cost aggregation settings.
method
property
costMatching
Cost matching settings.
method
property
postProcessing
Controls the postprocessing of disparity and/or depth map.
method
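Example: a typical runtime configuration using the chainable setters above:

```python
import depthai as dai

cfg = dai.StereoDepthConfig()
cfg.setConfidenceThreshold(200)   # 0..255
cfg.setLeftRightCheck(True)       # better occlusion handling
cfg.setSubpixel(True)             # 3 fractional bits by default
cfg.setMedianFilter(dai.StereoDepthConfig.MedianFilter.KERNEL_5x5)
print(cfg.getMaxDisparity())      # useful for normalizing the disparity map
```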
class

depthai.StereoDepthConfig.AlgorithmControl

class
DepthAlign
Align the disparity/depth to the perspective of a rectified output, or center it  Members:    RECTIFIED_RIGHT    RECTIFIED_LEFT    CENTER
class
DepthUnit
Measurement unit for depth data  Members:    METER    CENTIMETER    MILLIMETER    INCH    FOOT    CUSTOM
variable
method
property
customDepthUnitMultiplier
Custom depth unit multiplier, if custom depth unit is enabled, relative to 1 meter. A multiplier of 1000 effectively means depth unit in millimeter.
method
property
depthAlign
Set the disparity/depth alignment to the perspective of a rectified output, or center it
method
property
depthUnit
Measurement unit for depth data. Depth data is integer value, multiple of depth unit.
method
property
disparityShift
Shift input frame by a number of pixels to increase minimum depth. For example shifting by 48 will change effective disparity search range from (0,95] to [48,143]. An alternative approach to reducing the minZ. We normally only recommend doing this when it is known that there will be no objects farther away than MaxZ, such as having a depth camera mounted above a table pointing down at the table surface.
method
property
enableExtended
Disparity range increased from 95 to 190, combined from full resolution and downscaled images. Suitable for short range objects
method
property
enableLeftRightCheck
Computes disparities in both L-R and R-L directions and combines them, for better occlusion handling
method
property
enableSubpixel
Computes disparity with sub-pixel interpolation (5 fractional bits), suitable for long range
method
property
enableSwLeftRightCheck
Enables software left right check. Applicable to RVC4 only.
method
property
leftRightCheckThreshold
Left-right check threshold for left-right, right-left disparity map combine, 0..128. Used only when left-right check mode is enabled. Defines the maximum difference between the confidence of pixels from left-right and right-left confidence maps
method
property
numInvalidateEdgePixels
Invalidate X amount of pixels at the edge of disparity frame. For right and center alignment X pixels will be invalidated from the right edge, for left alignment from the left edge.
method
property
subpixelFractionalBits
Number of fractional bits for subpixel mode  Valid values: 3,4,5  Defines the number of fractional disparities: 2^x  Median filter postprocessing is supported only for 3 fractional bits
method
class

depthai.StereoDepthConfig.CensusTransform

class
KernelSize
Census transform kernel size possible values.  Members:    AUTO    KERNEL_5x5    KERNEL_7x7    KERNEL_7x9
method
property
enableMeanMode
If enabled, each pixel in the window is compared with the mean window value instead of the central pixel.
method
property
kernelMask
Census transform mask, default - auto, mask is set based on resolution and kernel size. Disabled for 400p input resolution. Enabled for 720p. 0XA82415 for 5x5 census transform kernel. 0XAA02A8154055 for 7x7 census transform kernel. 0X2AA00AA805540155 for 7x9 census transform kernel. Empirical values.
method
property
kernelSize
Census transform kernel size.
method
property
noiseThresholdOffset
Used to reduce small fixed levels of noise across all luminance values in the current image. Valid range is [0,127]. Default value is 0.
method
property
noiseThresholdScale
Used to reduce noise values that increase with luminance in the current image. Valid range is [-128,127]. Default value is 0.
method
property
threshold
Census transform comparison threshold value.
method
class

depthai.StereoDepthConfig.ConfidenceMetrics

method
property
flatnessConfidenceThreshold
Threshold for flatness check in SGM block. Valid range is [1,7].
method
property
flatnessConfidenceWeight
Weight used with flatness estimation to generate final confidence map. Valid range is [0,32].
method
property
flatnessOverride
Flag to indicate whether the final confidence value will be overridden by the flatness value. Valid range is {true,false}.
method
property
motionVectorConfidenceThreshold
Threshold offset for MV variance in confidence generation. A value of 0 allows most variance. Valid range is [0,3].
method
property
motionVectorConfidenceWeight
Weight used with local neighborhood motion vector variance estimation to generate final confidence map. Valid range is [0,32].
method
property
occlusionConfidenceWeight
Weight used with occlusion estimation to generate final confidence map. Valid range is [0,32]
method
class

depthai.StereoDepthConfig.CostAggregation

class
P1Config
Structure for adaptive P1 penalty configuration.
class
P2Config
Structure for adaptive P2 penalty configuration.
variable
variable
method
property
divisionFactor
Cost calculation linear equation parameters.
method
property
horizontalPenaltyCostP1
Horizontal P1 penalty cost parameter.
method
property
horizontalPenaltyCostP2
Horizontal P2 penalty cost parameter.
method
property
verticalPenaltyCostP1
Vertical P1 penalty cost parameter.
method
property
verticalPenaltyCostP2
Vertical P2 penalty cost parameter.
method
class

depthai.StereoDepthConfig.CostAggregation.P1Config

method
property
defaultValue
Used as the default penalty value when nAdapEnable is disabled. A bigger value enforces higher smoothness and reduced noise at the cost of lower edge accuracy. This value must be smaller than P2 default penalty. Valid range is [10,50].
method
property
edgeThreshold
Threshold value on edges when nAdapEnable is enabled. A bigger value permits higher neighboring feature dissimilarity tolerance. This value is shared with P2 penalty configuration. Valid range is [8,16].
method
property
edgeValue
Penalty value on edges when nAdapEnable is enabled. A smaller penalty value permits higher change in disparity. This value must be smaller than or equal to P2 edge penalty. Valid range is [10,50].
method
property
enableAdaptive
Used to disable/enable adaptive penalty.
method
property
smoothThreshold
Threshold value on low texture regions when nAdapEnable is enabled. A bigger value permits higher neighboring feature dissimilarity tolerance. This value is shared with P2 penalty configuration. Valid range is [2,12].
method
property
smoothValue
Penalty value on low texture regions when nAdapEnable is enabled. A smaller penalty value permits higher change in disparity. This value must be smaller than or equal to P2 smoothness penalty. Valid range is [10,50].
method
class

depthai.StereoDepthConfig.CostAggregation.P2Config

method
property
defaultValue
Used as the default penalty value when nAdapEnable is disabled. A bigger value enforces higher smoothness and reduced noise at the cost of lower edge accuracy. This value must be larger than P1 default penalty. Valid range is [20,100].
method
property
edgeValue
Penalty value on edges when nAdapEnable is enabled. A smaller penalty value permits higher change in disparity. This value must be larger than or equal to P1 edge penalty. Valid range is [20,100].
method
property
enableAdaptive
Used to disable/enable adaptive penalty.
method
property
smoothValue
Penalty value on low texture regions when nAdapEnable is enabled. A smaller penalty value permits higher change in disparity. This value must be larger than or equal to P1 smoothness penalty. Valid range is [20,100].
method
class

depthai.StereoDepthConfig.CostMatching

class
DisparityWidth
Disparity search range: 64 or 96 pixels are supported by the HW.  Members:    DISPARITY_64    DISPARITY_96
class
LinearEquationParameters
The linear equation applied for computing the cost is: COMB_COST = α*AD + β*(CTC<<3). CLAMP(COMB_COST >> 5, threshold). Where AD is the Absolute Difference between 2 pixels values. CTC is the Census Transform Cost between 2 pixels, based on Hamming distance (xor). The α and β parameters are subject to fine tuning by the user.
method
property
confidenceThreshold
Disparities with confidence value over this threshold are accepted.
method
property
disparityWidth
Disparity search range, default 96 pixels.
method
property
enableCompanding
Disparity companding using sparse matching. Matching pixel by pixel for N disparities. Matching every 2nd pixel for M disparities. Matching every 4th pixel for T disparities. In case of 96 disparities: N=48, M=32, T=16. This way the search range is extended to 176 disparities, by sparse matching. Note: when enabling this flag only the depth map will be affected; the disparity map is not.
method
property
enableSwConfidenceThresholding
Enable software confidence thresholding. Applicable to RVC4 only.
method
property
invalidDisparityValue
Used only for debug purposes; SW postprocessing handles only the invalid value of 0 properly.
method
property
linearEquationParameters
Cost calculation linear equation parameters.
method
class

depthai.StereoDepthConfig.CostMatching.LinearEquationParameters

class

depthai.StereoDepthConfig.PostProcessing

class
class
BrightnessFilter
Brightness filtering. If input frame pixel is too dark or too bright, disparity will be invalidated. The idea is that for too dark/too bright pixels we have low confidence, since that area was under/over exposed and details were lost.
class
DecimationFilter
Decimation filter. Reduces the depth scene complexity. The filter runs on kernel sizes [2x2] to [8x8] pixels.
class
class
SpatialFilter
1D edge-preserving spatial filter using high-order domain transform.
class
SpeckleFilter
Speckle filtering. Removes speckle noise.
class
TemporalFilter
Temporal filtering with optional persistence.
class
ThresholdFilter
Threshold filtering. Filters out distances outside of a given interval.
variable
variable
method
property
bilateralSigmaValue
Sigma value for bilateral filter. 0 means disabled. A larger value of the parameter means that farther colors within the pixel neighborhood will be mixed together.
method
property
brightnessFilter
Brightness filtering. If input frame pixel is too dark or too bright, disparity will be invalidated. The idea is that for too dark/too bright pixels we have low confidence, since that area was under/over exposed and details were lost.
method
property
decimationFilter
Decimation filter. Reduces disparity/depth map x/y complexity, reducing runtime complexity for other filters.
method
property
filteringOrder
Order of filters to be applied if filtering is enabled.
method
property
median
Set kernel size for disparity/depth median filtering, or disable
method
property
spatialFilter
Edge-preserving filtering: This type of filter will smooth the depth noise while attempting to preserve edges.
method
property
speckleFilter
Speckle filtering. Removes speckle noise.
method
property
temporalFilter
Temporal filtering with optional persistence.
method
property
thresholdFilter
Threshold filtering. Filters out distances outside of a given interval.
method
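Example: enabling a few filters through the nested structures above; the attribute-assignment pattern mirrors the official depth post-processing examples:

```python
import depthai as dai

cfg = dai.StereoDepthConfig()
cfg.postProcessing.speckleFilter.enable = True
cfg.postProcessing.speckleFilter.speckleRange = 50
cfg.postProcessing.temporalFilter.enable = True
cfg.postProcessing.spatialFilter.enable = True
cfg.postProcessing.spatialFilter.holeFillingRadius = 2
cfg.postProcessing.thresholdFilter.minRange = 400     # depth units (mm default)
cfg.postProcessing.thresholdFilter.maxRange = 15000
```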
class

depthai.StereoDepthConfig.PostProcessing.AdaptiveMedianFilter

method
property
confidenceThreshold
Confidence threshold for adaptive median filtering. Should be less than nFillConfThresh value used in evaDfsHoleFillConfig. Valid range is [0,255].
method
property
enable
Flag to enable adaptive median filtering for a final pass of filtering on low confidence pixels.
method
class

depthai.StereoDepthConfig.PostProcessing.BrightnessFilter

method
property
maxBrightness
Maximum pixel brightness. If the input pixel is greater than or equal to this value, the depth value is invalidated.
method
property
minBrightness
Minimum pixel brightness. If the input pixel is less than or equal to this value, the depth value is invalidated.
method
class

depthai.StereoDepthConfig.PostProcessing.HoleFilling

method
property
enable
Flag to enable post-processing hole-filling.
method
property
fillConfidenceThreshold
Pixels with confidence below this value will be filled with the average disparity of their corresponding superpixel. Valid range is [1,255].
method
property
highConfidenceThreshold
Pixels with confidence higher than this value are used to calculate an average disparity per superpixel. Valid range is [1,255]
method
property
invalidateDisparities
If enabled, sets to 0 the disparity of pixels with confidence below nFillConfThresh, which did not pass nMinValidPixels criteria. Valid range is {true, false}.
method
property
minValidDisparity
Represents the required percentage of pixels with confidence value above nHighConfThresh that are used to calculate the average disparity per superpixel, where 1 means 50% or half, 2 means 25% or a quarter and 3 means 12.5% or an eighth. If the required number of pixels is not found, the holes will not be filled.
method
class

depthai.StereoDepthConfig.PostProcessing.SpatialFilter

method
property
alpha
The Alpha factor in an exponential moving average: Alpha=1 means no filtering, Alpha=0 an infinite filter. Determines the amount of smoothing.
method
property
delta
Step-size boundary. Establishes the threshold used to preserve "edges". If the disparity value between neighboring pixels exceeds the threshold set by this delta parameter, filtering will be temporarily disabled. Default value 0 means auto: 3 disparity integer levels. In case of subpixel mode it's 3 times the number of subpixel levels.
method
property
enable
Whether to enable or disable the filter.
method
property
holeFillingRadius
An in-place heuristic symmetric hole-filling mode applied horizontally during the filter passes. Intended to rectify minor artefacts with minimal performance impact. Search radius for hole filling.
method
property
numIterations
Number of iterations over the image in both horizontal and vertical direction.
method
class

depthai.StereoDepthConfig.PostProcessing.SpeckleFilter

method
property
differenceThreshold
Maximum difference between neighbor disparity pixels to put them into the same blob. Units in disparity integer levels.
method
property
enable
Whether to enable or disable the filter.
method
property
speckleRange
Speckle search range.
method
class

depthai.StereoDepthConfig.PostProcessing.TemporalFilter

class
PersistencyMode
Persistency algorithm type.  Members:    PERSISTENCY_OFF    VALID_8_OUT_OF_8    VALID_2_IN_LAST_3    VALID_2_IN_LAST_4    VALID_2_OUT_OF_8    VALID_1_IN_LAST_2    VALID_1_IN_LAST_5    VALID_1_IN_LAST_8    PERSISTENCY_INDEFINITELY
method
property
alpha
The Alpha factor in an exponential moving average: Alpha=1 means no filtering, Alpha=0 an infinite filter. Determines the extent of the temporal history that should be averaged.
method
property
delta
Step-size boundary. Establishes the threshold used to preserve surfaces (edges). If the disparity value between neighboring pixels exceeds the threshold set by this delta parameter, filtering will be temporarily disabled. Default value 0 means auto: 3 disparity integer levels. In case of subpixel mode it's 3 times the number of subpixel levels.
method
property
enable
Whether to enable or disable the filter.
method
property
persistencyMode
Persistency mode. If the current disparity/depth value is invalid, it will be replaced by an older value, based on persistency mode.
method
class

depthai.StereoDepthConfig.PostProcessing.TemporalFilter.PersistencyMode

class

depthai.StereoDepthConfig.PostProcessing.ThresholdFilter

method
property
maxRange
Maximum range in depth units. Depth values over this value are invalidated.
method
property
minRange
Minimum range in depth units. Depth values under this value are invalidated.
method
class

depthai.StereoDepthProperties

class
variable
property
alphaScaling
Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). On some high-distortion lenses, and/or due to rectification (image rotated), invalid areas may appear even with alpha=0; in these cases alpha < 0.0 helps remove invalid areas. See getOptimalNewCameraMatrix from opencv for more details.
method
property
depthAlignCamera
Which camera to align disparity/depth to. When configured (not AUTO), takes precedence over 'depthAlign'
method
property
depthAlignmentUseSpecTranslation
Use baseline information for depth alignment from specs (design data) or from calibration. Suitable for debugging. The calibrated value is used by default.
method
property
disparityToDepthUseSpecTranslation
Use baseline information for disparity-to-depth conversion from specs (design data) or from calibration. Suitable for debugging. The calibrated value is used by default.
method
property
enableRectification
Enable stereo rectification/dewarp or not. Useful to disable when replaying pre-recorded rectified frames.
method
property
enableRuntimeStereoModeSwitch
Whether to enable switching stereo modes at runtime or not, e.g. standard to subpixel, or standard + LR-check to subpixel + LR-check. Note: resources are allocated for the worst-case scenario, so this should be enabled only if dynamic mode switching is required. Default value: false.
method
property
focalLength
Override focal length from calibration. Used only in disparity to depth conversion. Units are pixels.
method
property
focalLengthFromCalibration
Whether to use horizontal focal length from calibration intrinsics (fx) or calculate based on calibration FOV. Default value is true. If set to false it's calculated from FOV and image resolution: focalLength = calib.width / (2.f * tan(calib.fov / 2 / 180.f * pi));
method
property
height
Input frame height. Optional (taken from MonoCamera nodes if they exist)
method
property
initialConfig
Initial stereo config
method
property
mesh
Specify a direct warp mesh to be used for rectification, instead of intrinsics + extrinsic matrices
method
property
numFramesPool
Num frames in output pool
method
property
numPostProcessingMemorySlices
Number of memory slices reserved for stereo depth post processing. -1 means auto, memory will be allocated based on initial stereo settings and number of shaves. 0 means that it will reuse the memory slices assigned for main stereo algorithm. For optimal performance it's recommended to allocate more than 0, so post processing will run in parallel with main stereo algorithm. Minimum 1, maximum 6.
method
property
numPostProcessingShaves
Number of shaves reserved for stereo depth post processing. Post processing can use multiple shaves to increase performance. -1 means auto, resources will be allocated based on enabled filters. 0 means that it will reuse the shave assigned for main stereo algorithm. For optimal performance it's recommended to allocate more than 0, so post processing will run in parallel with main stereo algorithm. Minimum 1, maximum 10.
method
property
outHeight
Output disparity/depth height. Currently only used when aligning to RGB
method
property
outKeepAspectRatio
Whether to keep aspect ratio of the input (rectified) or not
method
property
outWidth
Output disparity/depth width. Currently only used when aligning to RGB
method
property
rectificationUseSpecTranslation
Obtain rectification matrices using spec translation (design data) or from calibration in calculations. Suitable for debugging. Default: false
method
property
rectifyEdgeFillColor
Fill color for missing data at frame edges - grayscale 0..255, or -1 to replicate pixels
method
property
useHomographyRectification
Use 3x3 homography matrix for stereo rectification instead of sparse mesh generated on device. Default behaviour is AUTO, for lenses with FOV over 85 degrees sparse mesh is used, otherwise 3x3 homography. If custom mesh data is provided through loadMeshData or loadMeshFiles this option is ignored. true: 3x3 homography matrix generated from calibration data is used for stereo rectification, can't correct lens distortion. false: sparse mesh is generated on-device from calibration data with mesh step specified with setMeshStep (Default: (16, 16)), can correct lens distortion. Implementation for generating the mesh is same as opencv's initUndistortRectifyMap function. Only the first 8 distortion coefficients are used from calibration data.
method
property
width
Input frame width. Optional (taken from MonoCamera nodes if they exist)
method
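These properties are normally configured through the StereoDepth node's setter methods rather than written directly. A minimal sketch using the v2-style setters (the method names are assumptions and may differ between releases):

```python
import depthai as dai

pipeline = dai.Pipeline()
stereo = pipeline.create(dai.node.StereoDepth)

stereo.setRectifyEdgeFillColor(0)                  # black out missing data at frame edges
stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)  # align depth to the CAM_A socket
stereo.setAlphaScaling(0.0)                        # keep only valid undistorted pixels
stereo.setRuntimeModeSwitch(True)                  # allocate for worst case to allow mode switching
stereo.setPostProcessingHardwareResources(3, 3)    # shaves / memory slices for post processing
```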
class

depthai.StereoDepthProperties.RectificationMesh

property
meshLeftUri
Uri which points to the mesh array for 'left' input rectification
method
property
meshRightUri
Uri which points to the mesh array for 'right' input rectification
method
property
meshSize
Mesh array size in bytes, for each of 'left' and 'right' (need to match)
method
property
stepHeight
Distance between mesh points, in the vertical direction
method
property
stepWidth
Distance between mesh points, in the horizontal direction
method
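Rather than filling these fields by hand, a custom warp mesh is typically supplied through the StereoDepth node; a sketch assuming the v2-style `loadMeshFiles` helper (file paths are placeholders):

```python
import depthai as dai

pipeline = dai.Pipeline()
stereo = pipeline.create(dai.node.StereoDepth)

# Supplying custom meshes disables on-device mesh generation; the left and
# right mesh arrays must have matching sizes (see meshSize above).
stereo.loadMeshFiles("left_mesh.calib", "right_mesh.calib")
```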
class

depthai.SyncProperties

class

depthai.SystemInformationS3(depthai.Buffer)

class

depthai.SystemLoggerProperties

class

depthai.TensorInfo

class
DataType
Members:    FP16    U8F    INT    FP32    I8    FP64
class
StorageOrder
Members:    NHWC    NHCW    NCHW    HWC    CHW    WHC    HCW    WCH    CWH    NC    CN    C    H    W
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
method
method
class

depthai.TextAnnotation

class

depthai.ThermalAmbientParams

method
property
atmosphericTemperature
Atmospheric temperature. Unit: K; range: 230-500 (high gain), 230-900 (low gain).
method
property
atmosphericTransmittance
Atmospheric transmittance. Unit: 1/128; range: 1-128 (0.01-1).
method
property
distance
Distance to the measured object. Unit: cnt (128 cnt = 1 m); range: 0-25600 (0-200 m).
method
property
gainMode
Gain mode, low or high.
method
property
reflectionTemperature
Reflection temperature. Unit: K; range: 230-500 (high gain), 230-900 (low gain).
method
property
targetEmissivity
Emissivity. Unit: 1/128; range: 1-128 (0.01-1).
method
class

depthai.ThermalConfig(depthai.Buffer)

method
method
property
ambientParams
Ambient factors that affect the temperature measurement of a Thermal sensor.
method
property
ffcParams
Parameters for Flat-Field-Correction.
method
property
imageParams
Image signal processing parameters on the sensor.
method
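A minimal sketch of building a thermal configuration, assuming the parameter structs are default-constructible and assignable (ThermalFFCParams and ThermalImageParams, described below, are set the same way):

```python
import depthai as dai

cfg = dai.ThermalConfig()

# Assumes ThermalAmbientParams is default-constructible and assignable.
amb = dai.ThermalAmbientParams()
amb.targetEmissivity = 122   # ~0.95 in 1/128 units
amb.distance = 256           # 2 m (128 cnt = 1 m)
cfg.ambientParams = amb
```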
class

depthai.ThermalFFCParams

variable
variable
method
property
autoFFC
Auto Flat-Field-Correction. Controls whether the shutter is driven automatically by the sensor module or not.
method
property
autoFFCTempThreshold
Auto FFC trigger threshold. Auto FFC is triggered when the change in the Vtemp value exceeds this threshold.
method
property
closeManualShutter
Set this to True/False to close/open the shutter when autoFFC is disabled.
method
property
fallProtection
Turns the fall-protection mechanism on or off. The shutter blade may open/close abnormally under strong mechanical shock (such as a fall); a monitoring process in the firmware corrects an abnormal shutter state in time.
method
property
maxFFCInterval
Maximum FFC interval when auto FFC is enabled. The time interval between two FFC runs should not exceed this value.
method
property
minFFCInterval
Minimum FFC interval when auto FFC is enabled. The time interval between two FFC runs should not be less than this value.
method
property
minShutterInterval
Minimum shutter trigger interval. Frequent FFC heats the shutter, degrading the FFC effect and the temperature measurement; regardless of which mechanism triggers FFC, the trigger interval is limited to this minimum.
method
class

depthai.ThermalImageParams

method
property
brightnessLevel
Image brightness level, 0-255.
method
property
contrastLevel
Image contrast level, 0-255.
method
property
digitalDetailEnhanceLevel
Digital detail enhancement level, 0-4.
method
property
orientation
Orientation of the image. Computed on the sensor.
method
property
spatialNoiseFilterLevel
Spatial noise filter level, 0-3.
method
property
timeNoiseFilterLevel
Temporal noise filter level, 0-3. Filters out noise that appears over time.
method
class

depthai.ThermalProperties

property
boardSocket
Which board socket the thermal camera will use
method
property
fps
Camera sensor FPS
method
property
initialConfig
Initial Thermal config
method
property
numFramesPool
Num frames in output pool
method
class

depthai.ThreadedNode(depthai.Node)

method
method
method
method
getLogLevel(self) -> LogLevel: LogLevel
Gets the logging severity level for this node.  Returns:     Logging severity level
method
method
method
setLogLevel(self, arg0: LogLevel)
Sets the logging severity level for this node.  Parameter ``level``:     Logging severity level
method
method
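A minimal sketch of per-node log levels, assuming `dai.node.Camera` derives from ThreadedNode and that the `dai.LogLevel` enum is available:

```python
import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera)  # assumed to be a ThreadedNode subclass

cam.setLogLevel(dai.LogLevel.DEBUG)     # verbose logging for this node only
print(cam.getLogLevel())
```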
class

depthai.ToFProperties

property
initialConfig
Initial ToF config
method
property
numFramesPool
Num frames in output pool
method
property
numShaves
Number of shaves reserved for ToF decoding.
method
property
warpHwIds
Warp HW IDs to use for undistortion, if empty, use auto/default
method
class

depthai.TrackedFeature

method
property
age
Feature age in frames
method
property
descriptor
Feature descriptor
method
property
harrisScore
Feature harris score
method
property
id
Feature ID. Persistent between frames if motion estimation is enabled.
method
property
position
x, y position of the detected feature
method
property
trackingError
Feature tracking error
method
class

depthai.TrackedFeatures(depthai.Buffer)

variable
method
method
method
getSequenceNum(self) -> int: int
Retrieves image sequence number
method
getTimestamp(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp related to dai::Clock::now()
method
getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
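A minimal sketch of consuming this message, assuming `queue` is an output queue connected to a FeatureTracker node and that the per-feature data is exposed via a `trackedFeatures` field (see TrackedFeature above):

```python
# 'queue' is assumed to be an output queue of a FeatureTracker node.
msg = queue.get()  # depthai.TrackedFeatures
for f in msg.trackedFeatures:
    # id persists between frames when motion estimation is enabled
    print(f.id, f.position.x, f.position.y, f.age, f.harrisScore)
```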
class

depthai.Tracklets(depthai.Buffer)

method
method
method
getSequenceNum(self) -> int: int
Retrieves image sequence number
method
getTimestamp(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp related to dai::Clock::now()
method
getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
property
tracklets
Retrieve data for Tracklets.  Returns:     Vector of object tracker data, carrying tracking information.
method
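A minimal sketch of reading tracker results, assuming `queue` is an output queue of an ObjectTracker node and that each tracklet carries the v2-style `id`, `status`, `roi` and `spatialCoordinates` fields:

```python
# 'queue' is assumed to be an output queue of an ObjectTracker node.
msg = queue.get()  # depthai.Tracklets
for t in msg.tracklets:
    roi = t.roi  # normalized bounding box of the tracked object (assumed field)
    print(t.id, t.status, t.spatialCoordinates.z)
```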
class

depthai.VectorCircleAnnotation

method
__bool__(self) -> bool: bool
Check whether the list is nonempty
method
method
method
method
method
method
method
append(self, x: CircleAnnotation)
Add an item to the end of the list
method
clear(self)
Clear the contents
method
method
insert(self, i: int, x: CircleAnnotation)
Insert an item at a given position.
method
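These Vector* classes are bound C++ std::vector instances with a Python-list-like interface; the same methods apply to VectorColor, VectorImgAnnotation, VectorPoint2f, VectorPointsAnnotation and VectorTextAnnotation below. A minimal sketch, assuming CircleAnnotation is default-constructible:

```python
import depthai as dai

vec = dai.VectorCircleAnnotation()
circle = dai.CircleAnnotation()   # assumed default-constructible
vec.append(circle)                # add an item to the end
vec.insert(0, circle)             # insert at a given position
print(bool(vec), len(vec))        # __bool__: nonempty check
vec.clear()                       # remove all items
```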
class

depthai.VectorColor

method
__bool__(self) -> bool: bool
Check whether the list is nonempty
method
method
method
method
method
method
method
append(self, x: Color)
Add an item to the end of the list
method
clear(self)
Clear the contents
method
method
insert(self, i: int, x: Color)
Insert an item at a given position.
method
class

depthai.VectorImgAnnotation

method
__bool__(self) -> bool: bool
Check whether the list is nonempty
method
method
method
method
method
method
method
append(self, x: ImgAnnotation)
Add an item to the end of the list
method
clear(self)
Clear the contents
method
method
insert(self, i: int, x: ImgAnnotation)
Insert an item at a given position.
method
class

depthai.VectorPoint2f

method
__bool__(self) -> bool: bool
Check whether the list is nonempty
method
method
method
method
method
method
method
append(self, x: Point2f)
Add an item to the end of the list
method
clear(self)
Clear the contents
method
method
insert(self, i: int, x: Point2f)
Insert an item at a given position.
method
class

depthai.VectorPointsAnnotation

method
__bool__(self) -> bool: bool
Check whether the list is nonempty
method
method
method
method
method
method
method
append(self, x: PointsAnnotation)
Add an item to the end of the list
method
clear(self)
Clear the contents
method
method
insert(self, i: int, x: PointsAnnotation)
Insert an item at a given position.
method
class

depthai.VectorTextAnnotation

method
__bool__(self) -> bool: bool
Check whether the list is nonempty
method
method
method
method
method
method
method
append(self, x: TextAnnotation)
Add an item to the end of the list
method
clear(self)
Clear the contents
method
method
insert(self, i: int, x: TextAnnotation)
Insert an item at a given position.
method
class

depthai.VideoEncoderProperties

class
Profile
Encoding profile, H264 (AVC), H265 (HEVC) or MJPEG  Members:    H264_BASELINE    H264_HIGH    H264_MAIN    H265_MAIN    MJPEG
class
RateControlMode
Rate control mode specifies if constant or variable bitrate should be used (H264 / H265)  Members:    CBR    VBR
variable
variable
variable
variable
variable
variable
variable
variable
variable
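Profile and rate control are normally chosen through the VideoEncoder node rather than by writing these properties directly; a sketch assuming the v2-style setters (signatures may differ between releases):

```python
import depthai as dai

pipeline = dai.Pipeline()
enc = pipeline.create(dai.node.VideoEncoder)

# H.265 main profile at 30 FPS with constant bitrate (assumed v2-style setters).
enc.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.H265_MAIN)
enc.setRateControlMode(dai.VideoEncoderProperties.RateControlMode.CBR)
enc.setBitrateKbps(4000)
```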
class

depthai.VioConfig

variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
variable
method