DepthAI v2 has been superseded by DepthAI v3. You are viewing legacy documentation.

    DepthAI Python API

    The DepthAI Python API source can be found on GitHub at luxonis/depthai-python. Below is the reference documentation for the Python API.
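    A minimal usage sketch (assuming a connected OAK device and depthai v2 installed): define a pipeline of nodes, hand it to a Device, then read results from an XLink output queue. The stream name and queue settings here are illustrative.

        import depthai as dai

        pipeline = dai.Pipeline()

        # Define a color camera and stream its preview frames to the host
        cam = pipeline.create(dai.node.ColorCamera)
        cam.setPreviewSize(300, 300)
        cam.setInterleaved(False)

        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("preview")
        cam.preview.link(xout.input)

        # Connect to the device, upload the pipeline and read frames
        with dai.Device(pipeline) as device:
            q = device.getOutputQueue("preview", maxSize=4, blocking=False)
            frame = q.get().getCvFrame()  # numpy BGR image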
    package

    depthai

    class
    ADatatype
    Abstract message
    class
    AprilTag
    AprilTag structure.
    class
    AprilTagConfig
    AprilTagConfig message.
    class
    AprilTagProperties
    Specify properties for AprilTag
    class
    AprilTags
    AprilTags message.
    class
    Asset
    Asset is identified with string key and can store arbitrary binary data
    class
    AssetManager
    AssetManager can store assets and serialize them
    class
    Buffer
    Base message - buffer of binary data
    class
    CalibrationHandler
    CalibrationHandler is an interface to read/load/write structured calibration and device data. The following fields are protected and aren't allowed to be overridden by default: - boardName - boardRev - boardConf - hardwareConf - batchName - batchTime - boardOptions - productName
    class
    CameraBoardSocket
    Which Camera socket to use.  AUTO denotes that the decision will be made by device  Members:    AUTO    CAM_A    CAM_B    CAM_C    CAM_D    CAM_E    CAM_F    CAM_G    CAM_H    RGB : **Deprecated:** Use CAM_A or address camera by name instead    LEFT : **Deprecated:** Use CAM_B or address camera by name instead    RIGHT : **Deprecated:** Use CAM_C or address camera by name instead    CENTER : **Deprecated:** Use CAM_A or address camera by name instead
    class
    CameraControl
    CameraControl message. Specifies various camera control commands like:  - Still capture  - Auto/manual focus  - Auto/manual white balance  - Auto/manual exposure  - Anti banding  - ...  By default the camera enables 3A, with auto-focus in `CONTINUOUS_VIDEO` mode, auto-white-balance in `AUTO` mode, and auto-exposure with anti-banding for 50Hz mains frequency.
    class
    CameraExposureOffset
    Members:    START    MIDDLE    END
    class
    CameraFeatures
    CameraFeatures structure  Characterizes detected cameras on board
    class
    CameraImageOrientation
    Camera sensor image orientation / pixel readout. This exposes direct sensor settings. 90 or 270 degrees rotation is not available.  AUTO denotes that the decision will be made by device (e.g. on OAK-1/megaAI: ROTATE_180_DEG).  Members:    AUTO    NORMAL    HORIZONTAL_MIRROR    VERTICAL_FLIP    ROTATE_180_DEG
    class
    CameraInfo
    CameraInfo structure
    class
    CameraModel
    Which CameraModel to initialize the calibration with.  Members:    Perspective    Fisheye    Equirectangular    RadialDivision
    class
    CameraProperties
    Specify properties for ColorCamera such as camera ID, ...
    class
    CameraSensorConfig
    Sensor config
    class
    CameraSensorType
    Camera sensor type  Members:    COLOR    MONO    TOF    THERMAL
    class
    CastProperties
    Specify properties for Cast
    class
    ChipTemperature
    Chip temperature information.  Multiple temperature measurement points and their average
    class
    ColorCameraProperties
    Specify properties for ColorCamera such as camera ID, ...
    class
    Colormap
    Colormap type  Members:    NONE    JET    TURBO    STEREO_JET    STEREO_TURBO
    class
    CpuUsage
    CpuUsage structure  Average usage in percent and time span of the average (since last query)
    class
    DataInputQueue
    Access to send messages through XLink stream
    class
    DataOutputQueue
    Access to receive messages coming from XLink stream
    class
    DatatypeEnum
    Members:    Buffer    ImgFrame    EncodedFrame    NNData    ImageManipConfig    CameraControl    ImgDetections    SpatialImgDetections    SystemInformation    SpatialLocationCalculatorConfig    SpatialLocationCalculatorData    EdgeDetectorConfig    AprilTagConfig    AprilTags    Tracklets    IMUData    StereoDepthConfig    FeatureTrackerConfig    TrackedFeatures    PointCloudConfig    PointCloudData
    class
    DetectionNetworkProperties
    Specify properties for DetectionNetwork
    class
    DetectionNetworkType
    Members:    YOLO    MOBILENET
    class
    DetectionParserOptions
    DetectionParserOptions  Specifies how to parse output of detection networks
    class
    DetectionParserProperties
    Specify properties for DetectionParser
    class
    Device
    Represents the DepthAI device with the methods to interact with it. Implements the host-side queues to connect with XLinkIn and XLinkOut nodes
    class
    DeviceBase
    The core of depthai device for RAII, connects to device and maintains watchdog, timesync, ...
    class
    DeviceBootloader
    Represents the DepthAI bootloader with the methods to interact with it.
    class
    DeviceInfo
    Describes a connected device
    class
    EdgeDetectorConfig
    EdgeDetectorConfig message. Carries sobel edge filter config.
    class
    EdgeDetectorConfigData
    EdgeDetectorConfigData configuration data structure
    class
    EdgeDetectorProperties
    Specify properties for EdgeDetector
    class
    EepromData
    EepromData structure  Contains the Calibration and Board data stored on device
    class
    Extrinsics
    Extrinsics structure
    class
    FeatureTrackerConfig
    FeatureTrackerConfig message. Carries config for feature tracking algorithm
    class
    FeatureTrackerProperties
    Specify properties for FeatureTracker
    class
    FrameEvent
    Members:    NONE    READOUT_START    READOUT_END
    class
    GlobalProperties
    Specify properties which apply for whole pipeline
    class
    IMUData
    IMUData message. Carries IMU packets
    class
    IMUPacket
    IMU output  Contains combined output for all possible modes. Only the enabled outputs are populated.
    class
    IMUReportAccelerometer
    Accelerometer  Units are [m/s^2]
    class
    IMUReportGyroscope
    Gyroscope  Units are [rad/s]
    class
    IMUReportMagneticField
    Magnetic field  Units are [uTesla]
    class
    IMUReportRotationVectorWAcc
    Rotation Vector with Accuracy  Contains quaternion components: i,j,k,real
    class
    IMUSensor
    Available IMU sensors. More details about each sensor can be found in the datasheet:  https://www.ceva-dsp.com/wp-content/uploads/2019/10/BNO080_085-Datasheet.pdf  Members:    ACCELEROMETER_RAW : Section 2.1.1  Acceleration of the device without any postprocessing, straight from the sensor. Units are [m/s^2]    ACCELEROMETER : Section 2.1.1  Acceleration of the device including gravity. Units are [m/s^2]    LINEAR_ACCELERATION : Section 2.1.1  Acceleration of the device with gravity removed. Units are [m/s^2]    GRAVITY : Section 2.1.1  Gravity. Units are [m/s^2]    GYROSCOPE_RAW : Section 2.1.2  The angular velocity of the device without any postprocessing, straight from the sensor. Units are [rad/s]    GYROSCOPE_CALIBRATED : Section 2.1.2  The angular velocity of the device. Units are [rad/s]    GYROSCOPE_UNCALIBRATED : Section 2.1.2  Angular velocity without bias compensation. Units are [rad/s]    MAGNETOMETER_RAW : Section 2.1.3  Magnetic field measurement without any postprocessing, straight from the sensor. Units are [uTesla]    MAGNETOMETER_CALIBRATED : Section 2.1.3  The fully calibrated magnetic field measurement. Units are [uTesla]    MAGNETOMETER_UNCALIBRATED : Section 2.1.3  The magnetic field measurement without hard-iron offset applied. Units are [uTesla]    ROTATION_VECTOR : Section 2.2  The rotation vector provides an orientation output that is expressed as a quaternion referenced to magnetic north and gravity. It is produced by fusing the outputs of the accelerometer, gyroscope and magnetometer. The rotation vector is the most accurate orientation estimate available. The magnetometer provides correction in yaw to reduce drift and the gyroscope enables the most responsive performance.    GAME_ROTATION_VECTOR : Section 2.2  The game rotation vector is an orientation output that is expressed as a quaternion with no specific reference for heading, while roll and pitch are referenced against gravity. It is produced by fusing the outputs of the accelerometer and the gyroscope (i.e. no magnetometer). The game rotation vector does not use the magnetometer to correct the gyroscope's drift in yaw. This is a deliberate omission (as specified by Google) to allow gaming applications to use a smoother representation of the orientation, without the jumps that an instantaneous correction from a magnetic field update would introduce. Long term the output will likely drift in yaw due to the characteristics of gyroscopes, but this is seen as preferable for this output versus a corrected output.    GEOMAGNETIC_ROTATION_VECTOR : Section 2.2  The geomagnetic rotation vector is an orientation output that is expressed as a quaternion referenced to magnetic north and gravity. It is produced by fusing the outputs of the accelerometer and magnetometer. The gyroscope is specifically excluded in order to produce a rotation vector output using less power than is required to produce the rotation vector of section 2.2.4. The consequences of removing the gyroscope are: less responsive output, since the highly dynamic outputs of the gyroscope are not used, and more errors in the presence of varying magnetic fields.    ARVR_STABILIZED_ROTATION_VECTOR : Section 2.2  Estimates of the magnetic field and the roll/pitch of the device can create a potential correction in the rotation vector produced. For applications (typically augmented or virtual reality applications) where a sudden jump can be disturbing, the output is adjusted to prevent these jumps in a manner that takes account of the velocity of the sensor system.    ARVR_STABILIZED_GAME_ROTATION_VECTOR : Section 2.2  While the magnetometer is removed from the calculation of the game rotation vector, the accelerometer itself can create a potential correction in the rotation vector produced (i.e. the estimate of gravity changes). For applications (typically augmented or virtual reality applications) where a sudden jump can be disturbing, the output is adjusted to prevent these jumps in a manner that takes account of the velocity of the sensor system. This process is called AR/VR stabilization.
    class
    ImageAlignConfig
    ImageAlignConfig message
    class
    ImageAlignProperties
    Specify properties for ImageAlign
    class
    ImageManipConfig
    ImageManipConfig message. Specifies image manipulation options like:  - Crop  - Resize  - Warp  - ...
    class
    ImgDetection
    ImgDetection structure
    class
    ImgDetections
    ImgDetections message. Carries normalized detection results
    class
    ImgFrame
    ImgFrame message. Carries image data and metadata.
    class
    Interpolation
    Interpolation type  Members:    BILINEAR    BICUBIC    NEAREST_NEIGHBOR    BYPASS    DEFAULT    DEFAULT_DISPARITY_DEPTH
    class
    LogLevel
    Members:    TRACE    DEBUG    INFO    WARN    ERR    CRITICAL    OFF
    class
    MedianFilter
    Median filter config  Members:    MEDIAN_OFF    KERNEL_3x3    KERNEL_5x5    KERNEL_7x7
    class
    MemoryInfo
    MemoryInfo structure  Free, remaining and total memory stats
    class
    MessageGroup
    MessageGroup message. Carries multiple messages in one.
    class
    MonoCameraProperties
    Specify properties for MonoCamera such as camera ID, ...
    class
    NNData
    NNData message. Carries tensors and their metadata
    class
    NeuralNetworkProperties
    Specify properties for NeuralNetwork such as blob path, ...
    class
    Node
    Abstract Node
    class
    ObjectTrackerConfig
    ObjectTrackerConfig message. Carries configuration for the ObjectTracker node
    class
    ObjectTrackerProperties
    Specify properties for ObjectTracker
    class
    OpenVINO
    Support for basic OpenVINO related actions like version identification of neural network blobs,...
    class
    Point2f
    Point2f structure  x and y coordinates that define a 2D point.
    class
    Point3f
    Point3f structure  x,y,z coordinates that define a 3D point.
    class
    PointCloudConfig
    PointCloudConfig message. Carries point cloud configuration.
    class
    PointCloudData
    PointCloudData message. Carries point cloud data.
    class
    PointCloudProperties
    Specify properties for PointCloud
    class
    ProcessorType
    Members:    LEON_CSS    LEON_MSS
    class
    Properties
    Base Properties structure
    class
    RawAprilTagConfig
    RawAprilTags configuration structure
    class
    RawBuffer
    RawBuffer structure
    class
    RawCameraControl
    RawCameraControl structure
    class
    RawEdgeDetectorConfig
    RawEdgeDetectorConfig configuration structure
    class
    RawFeatureTrackerConfig
    RawFeatureTrackerConfig configuration structure
    class
    RawImageAlignConfig
    RawImageAlignConfig configuration structure
    class
    RawImageManipConfig
    RawImageManipConfig structure
    class
    RawImgDetections
    RawImgDetections structure
    class
    RawImgFrame
    RawImgFrame structure
    class
    RawNNData
    RawNNData structure
    class
    RawObjectTrackerConfig
    RawObjectTrackerConfig configuration structure
    class
    RawPointCloudConfig
    RawPointCloudConfig configuration structure
    class
    RawSpatialImgDetections
    RawSpatialImgDetections structure
    class
    RawStereoDepthConfig
    RawStereoDepthConfig configuration structure
    class
    RawSystemInformation
    System information of device  Memory usage, cpu usage and chip temperature
    class
    RawToFConfig
    RawToFConfig configuration structure
    class
    RawTrackedFeatures
    RawTrackedFeatures structure
    class
    RawTracklets
    RawTracklets structure
    class
    Rect
    Rect structure  x,y coordinates together with width and height that define a rectangle. Can be either normalized [0,1] or absolute representation.
    class
    RotatedRect
    RotatedRect structure
    class
    SPIInProperties
    Properties for SPIIn node
    class
    SPIOutProperties
    Specify properties for SPIOut node
    class
    ScriptProperties
    Specify ScriptProperties options such as script uri, script name, ...
    class
    SerializationType
    Members:    LIBNOP    JSON    JSON_MSGPACK
    class
    Size2f
    Size2f structure  width, height values define the size of the shape/frame
    class
    SpatialDetectionNetworkProperties
    Specify properties for SpatialDetectionNetwork
    class
    SpatialImgDetection
    SpatialImgDetection structure  Contains image detection results together with spatial location data.
    class
    SpatialImgDetections
    SpatialImgDetections message. Carries detection results together with spatial location data
    class
    SpatialLocationCalculatorAlgorithm
    SpatialLocationCalculatorAlgorithm configuration modes  Contains calculation method used to obtain spatial locations.  Members:    AVERAGE    MEAN    MIN    MAX    MODE    MEDIAN
    class
    SpatialLocationCalculatorConfig
    SpatialLocationCalculatorConfig message. Carries ROI (region of interest) and threshold for depth calculation
    class
    SpatialLocationCalculatorConfigData
    SpatialLocation configuration data structure
    class
    SpatialLocationCalculatorConfigThresholds
    SpatialLocation configuration thresholds structure  Contains configuration data for lower and upper threshold in depth units (millimeter by default) for ROI. Values outside of threshold range will be ignored when calculating spatial coordinates from depth map.
    class
    SpatialLocationCalculatorData
    SpatialLocationCalculatorData message. Carries spatial information (X,Y,Z) and their configuration parameters
    class
    SpatialLocationCalculatorProperties
    Specify properties for SpatialLocationCalculator
    class
    SpatialLocations
    SpatialLocations structure  Contains configuration data, average depth for the calculated ROI on depth map. Together with spatial coordinates: x,y,z relative to the center of depth map. Units are in depth units (millimeter by default).
    class
    StereoDepthConfig
    StereoDepthConfig message.
    class
    StereoDepthProperties
    Specify properties for StereoDepth
    class
    StereoPair
    Describes which camera sockets can be used for stereo and their baseline.
    class
    StereoRectification
    StereoRectification structure
    class
    SyncProperties
    Specify properties for Sync.
    class
    SystemInformation
    SystemInformation message. Carries memory usage, cpu usage and chip temperatures.
    class
    SystemLoggerProperties
    SystemLoggerProperties structure
    class
    TensorInfo
    TensorInfo structure
    class
    Timestamp
    Timestamp structure
    class
    ToFConfig
    ToFConfig message. Carries config for the ToF node
    class
    ToFProperties
    Specify properties for ToF
    class
    TrackedFeature
    TrackedFeature structure
    class
    TrackedFeatures
    TrackedFeatures message. Carries position (X, Y) of tracked features and their ID.
    class
    TrackerIdAssignmentPolicy
    Members:    UNIQUE_ID    SMALLEST_ID
    class
    TrackerType
    Members:    SHORT_TERM_KCF : Kernelized Correlation Filter tracking    SHORT_TERM_IMAGELESS : Short term tracking without using image data    ZERO_TERM_IMAGELESS : Ability to track the objects without accessing image data.    ZERO_TERM_COLOR_HISTOGRAM : Tracking using image data too.
    class
    Tracklet
    Tracklet structure  Contains tracklets from object tracker output.
    class
    Tracklets
    Tracklets message. Carries object tracking information.
    class
    UVCProperties
    Properties for UVC node
    class
    UsbSpeed
    Get USB Speed  Members:    UNKNOWN    LOW    FULL    HIGH    SUPER    SUPER_PLUS
    class
    Version
    Version structure
    class
    VideoEncoderProperties
    Specify properties for VideoEncoder such as profile, bitrate, ...
    class
    WarpProperties
    Specify properties for Warp
    class
    XLinkConnection
    Represents connection between host and device over XLink protocol
    class
    XLinkDeviceState
    Members:    X_LINK_ANY_STATE    X_LINK_BOOTED    X_LINK_UNBOOTED    X_LINK_BOOTLOADER    X_LINK_FLASH_BOOTED
    class
    XLinkError_t
    Members:    X_LINK_SUCCESS    X_LINK_ALREADY_OPEN    X_LINK_COMMUNICATION_NOT_OPEN    X_LINK_COMMUNICATION_FAIL    X_LINK_COMMUNICATION_UNKNOWN_ERROR    X_LINK_DEVICE_NOT_FOUND    X_LINK_TIMEOUT    X_LINK_ERROR    X_LINK_OUT_OF_MEMORY    X_LINK_INSUFFICIENT_PERMISSIONS    X_LINK_DEVICE_ALREADY_IN_USE    X_LINK_NOT_IMPLEMENTED    X_LINK_INIT_USB_ERROR    X_LINK_INIT_TCP_IP_ERROR    X_LINK_INIT_PCIE_ERROR
    class
    XLinkInProperties
    Specify properties for XLinkIn such as stream name, ...
    class
    XLinkOutProperties
    Specify properties for XLinkOut such as stream name, ...
    class
    XLinkPlatform
    Members:    X_LINK_ANY_PLATFORM    X_LINK_MYRIAD_2    X_LINK_MYRIAD_X
    class
    XLinkProtocol
    Members:    X_LINK_USB_VSC    X_LINK_USB_CDC    X_LINK_PCIE    X_LINK_TCP_IP    X_LINK_IPC    X_LINK_NMB_OF_PROTOCOLS    X_LINK_ANY_PROTOCOL
    class
    connectionInterface
    Members:    USB    ETHERNET    WIFI
    module

    depthai.node

    class
    AprilTag
    AprilTag node.
    class
    Camera
    Camera node. Experimental node, for both mono and color types of sensors
    class
    Cast
    Cast node.
    class
    ColorCamera
    ColorCamera node. For use with color sensors.
    class
    DetectionNetwork
    DetectionNetwork, base for different network specializations
    class
    DetectionParser
    DetectionParser node. Parses detection results from different neural networks and is used internally by MobileNetDetectionNetwork and YoloDetectionNetwork.
    class
    EdgeDetector
    EdgeDetector node. Performs edge detection using 3x3 Sobel filter
    class
    FeatureTracker
    FeatureTracker node. Performs feature tracking and reidentification using motion estimation between 2 consecutive frames.
    class
    IMU
    IMU node for BNO08X.
    class
    ImageAlign
    ImageAlign node. Aligns frames from one sensor to the viewpoint of another (e.g. depth to color).
    class
    ImageManip
    ImageManip node. Capability to crop, resize, warp, ... incoming image frames
    class
    MessageDemux
    MessageDemux node. Splits a MessageGroup message back into its individual messages.
    class
    MobileNetDetectionNetwork
    MobileNetDetectionNetwork node. Parses MobileNet results
    class
    MobileNetSpatialDetectionNetwork
    MobileNetSpatialDetectionNetwork node. Mobilenet-SSD based network with spatial location data.
    class
    MonoCamera
    MonoCamera node. For use with grayscale sensors.
    class
    NeuralNetwork
    NeuralNetwork node. Runs a neural inference on input data.
    class
    ObjectTracker
    ObjectTracker node. Performs object tracking using a Kalman filter and the Hungarian algorithm.
    class
    PointCloud
    PointCloud node. Computes point cloud from depth frames.
    class
    SPIIn
    SPIIn node. Receives messages over SPI.
    class
    SPIOut
    SPIOut node. Sends messages over SPI.
    class
    Script
    Script node. Runs a custom Python script on the device.
    class
    SpatialDetectionNetwork
    SpatialDetectionNetwork node. Runs a neural inference on input image and calculates spatial location data.
    class
    SpatialLocationCalculator
    SpatialLocationCalculator node. Calculates spatial location data on a set of ROIs on depth map.
    class
    StereoDepth
    StereoDepth node. Compute stereo disparity and depth from left-right image pair.
    class
    Sync
    Sync node. Synchronizes multiple input messages based on their timestamps.
    class
    SystemLogger
    SystemLogger node. Send system information periodically.
    class
    ToF
    ToF node
    class
    UVC
    UVC (USB Video Class) node
    class
    VideoEncoder
    VideoEncoder node. Encodes frames into MJPEG, H264 or H265.
    class
    Warp
    Warp node. Capability to crop, resize, warp, ... incoming image frames
    class
    XLinkIn
    XLinkIn node. Receives messages over XLink.
    class
    XLinkOut
    XLinkOut node. Sends messages over XLink.
    class
    YoloDetectionNetwork
    YoloDetectionNetwork node. Parses Yolo results
    class
    YoloSpatialDetectionNetwork
    YoloSpatialDetectionNetwork node. Yolo-based network with spatial location data.
    class

    depthai.node.AprilTag(depthai.Node)

    method
    setWaitForConfigInput(self, wait: bool)
    Specify whether or not to wait until a configuration message arrives on the inputConfig input.  Parameter ``wait``:     True to wait for configuration message, false otherwise.
    property
    initialConfig
    Initial config to use for AprilTag detection.
    property
    inputConfig
    Input AprilTagConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
    property
    inputImage
    Input image on which AprilTag detection is performed. Default queue is non-blocking with size 4.
    property
    out
    Outputs AprilTags message that carries the detected tags.
    property
    passthroughInputImage
    Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
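    A minimal sketch of wiring the AprilTag node, following the official example; the tag family and stream name are illustrative.

        import depthai as dai

        pipeline = dai.Pipeline()

        mono = pipeline.create(dai.node.MonoCamera)
        mono.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)

        aprilTag = pipeline.create(dai.node.AprilTag)
        aprilTag.initialConfig.setFamily(dai.AprilTagConfig.Family.TAG_36H11)
        mono.out.link(aprilTag.inputImage)

        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("aprilTagData")
        aprilTag.out.link(xout.input)
        # on the host, q.get().aprilTags yields the detected tags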
    class

    depthai.node.Camera(depthai.Node)

    method
    getBoardSocket(self) -> depthai.CameraBoardSocket
    Retrieves which board socket to use  Returns:     Board socket to use
    method
    getCalibrationAlpha(self) -> float|None
    Get calibration alpha parameter that determines FOV of undistorted frames
    method
    getCamera(self) -> str
    Retrieves which camera to use by name  Returns:     Name of the camera to use
    method
    getFps(self) -> float
    Get rate at which camera should produce frames  Returns:     Rate in frames per second
    method
    getHeight(self) -> int
    Get sensor resolution height
    method
    getMeshStep(self) -> tuple[int, int]
    Gets the distance between mesh points
    method
    getWidth(self) -> int
    Get sensor resolution width
    method
    loadMeshData(self, warpMesh: typing_extensions.Buffer)
    Specify mesh calibration data for undistortion See `loadMeshFiles` for the expected data format
    method
    loadMeshFile(self, warpMesh: Path)
    Specify local filesystem paths to the undistort mesh calibration files.  When a mesh calibration is set, it overrides the camera intrinsics/extrinsics matrices. Overrides useHomographyRectification behavior. Mesh format: a sequence of (y,x) points as 'float' with coordinates from the input image to be mapped in the output. The mesh can be subsampled, configured by `setMeshStep`.  With a 1280x800 resolution and the default (16,16) step, the required mesh size is:  width: 1280 / 16 + 1 = 81  height: 800 / 16 + 1 = 51
    method
    setBoardSocket(self, boardSocket: depthai.CameraBoardSocket)
    Specify which board socket to use  Parameter ``boardSocket``:     Board socket to use
    method
    setCalibrationAlpha(self, alpha: typing.SupportsFloat)
    Set calibration alpha parameter that determines FOV of undistorted frames
    method
    setCamera(self, name: str)
    Specify which camera to use by name  Parameter ``name``:     Name of the camera to use
    method
    setFps(self, fps: typing.SupportsFloat)
    Set rate at which camera should produce frames  Parameter ``fps``:     Rate in frames per second
    method
    setIsp3aFps(self, isp3aFps: typing.SupportsInt)
    Set the ISP 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A is running on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also slows down 3A convergence. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps and a camera control is sent on every frame, but 3A fps is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.
    method
    setMeshStep(self, width: typing.SupportsInt, height: typing.SupportsInt)
    Set the distance between mesh points. Default: (32, 32)
    method
    setRawOutputPacked(self, packed: bool)
    Configures whether the camera `raw` frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (`false`), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000'00pp'pppp'pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.
    property
    frameEvent
    Outputs metadata-only ImgFrame message as an early indicator of an incoming frame.  It's sent on the MIPI SoF (start-of-frame) event, just after the exposure of the current frame has finished and before the exposure for next frame starts. Could be used to synchronize various processes with camera capture. Fields populated: camera id, sequence number, timestamp
    property
    initialControl
    Initial control options to apply to sensor
    property
    inputConfig
    Input for ImageManipConfig message, which can modify crop parameters in runtime  Default queue is non-blocking with size 8
    property
    inputControl
    Input for CameraControl message, which can modify camera parameters in runtime  Default queue is blocking with size 8
    property
    isp
    Outputs ImgFrame message that carries YUV420 planar (I420/IYUV) frame data.  Generated by the ISP engine, and the source for the 'video', 'preview' and 'still' outputs
    property
    preview
    Outputs ImgFrame message that carries BGR/RGB planar/interleaved encoded frame data.  Suitable for use with NeuralNetwork node
    property
    raw
    Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.  Captured directly from the camera sensor, and the source for the 'isp' output.
    property
    still
    Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.  The message is sent only when a CameraControl message arrives to inputControl with captureStill command set.
    property
    video
    Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.  Suitable for use with VideoEncoder node
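    A sketch using only the methods documented above (assuming a pipeline and XLinkOut as in the earlier sketch); selecting the sensor by name avoids the deprecated RGB/LEFT/RIGHT socket aliases.

        cam = pipeline.create(dai.node.Camera)
        cam.setCamera("color")        # address the sensor by name
        cam.setFps(30)
        cam.setCalibrationAlpha(1.0)  # keep full FOV in the undistorted output

        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("video")
        cam.video.link(xout.input)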
    class

    depthai.node.Cast(depthai.Node)

    method
    setNumFramesPool(self, arg0: typing.SupportsInt) -> Cast
    Set number of frames in pool  Parameter ``numFramesPool``:     Number of frames in pool
    method
    setOffset(self, arg0: typing.SupportsFloat) -> Cast
    Set offset  Parameter ``offset``:     Offset
    method
    setOutputFrameType(self, arg0: depthai.RawImgFrame.Type) -> Cast
    Set output frame type  Parameter ``outputType``:     Output frame type
    property
    input
    Input NNData or ImgFrame message.
    property
    output
    Output ImgFrame message.
    property
    passthroughInput
    Passthrough input message.
    class

    depthai.node.ColorCamera(depthai.Node)

    method
    getBoardSocket(self) -> depthai.CameraBoardSocket
    Retrieves which board socket to use  Returns:     Board socket to use
    method
    getCamera(self) -> str
    Retrieves which camera to use by name  Returns:     Name of the camera to use
    method
    getFp16(self) -> bool
    Get fp16 (0..255) data of preview output frames
    method
    getFps(self) -> float
    Get rate at which camera should produce frames  Returns:     Rate in frames per second
    method
    getInterleaved(self) -> bool
    Get planar or interleaved data of preview output frames
    method
    getIspHeight(self) -> int
    Get 'isp' output height
    method
    getIspNumFramesPool(self) -> int
    Get number of frames in isp pool
    method
    getIspSize(self) -> tuple[int, int]
    Get 'isp' output resolution as size, after scaling
    method
    getIspWidth(self) -> int
    Get 'isp' output width
    method
    getPreviewKeepAspectRatio(self) -> bool
    See also:     setPreviewKeepAspectRatio  Returns:     Preview keep aspect ratio option
    method
    getPreviewNumFramesPool(self) -> int
    Get number of frames in preview pool
    method
    getRawNumFramesPool(self) -> int
    Get number of frames in raw pool
    method
    getResolutionHeight(self) -> int
    Get sensor resolution height
    method
    getResolutionWidth(self) -> int
    Get sensor resolution width
    method
    getSensorCropX(self) -> float
    Get sensor top left x crop coordinate
    method
    getSensorCropY(self) -> float
    Get sensor top left y crop coordinate
    method
    getStillNumFramesPool(self) -> int
    Get number of frames in still pool
    method
    getVideoNumFramesPool(self) -> int
    Get number of frames in video pool
    method
    getWaitForConfigInput(self) -> bool
    See also:     setWaitForConfigInput  Returns:     True if wait for inputConfig message, false otherwise
    method
    sensorCenterCrop(self)
    Specify sensor center crop. Resolution size / video size
    method
    setBoardSocket(self, boardSocket: depthai.CameraBoardSocket)
    Specify which board socket to use  Parameter ``boardSocket``:     Board socket to use
    method
    setCamera(self, name: str)
    Specify which camera to use by name  Parameter ``name``:     Name of the camera to use
    method
    setColorOrder(self, colorOrder: depthai.ColorCameraProperties.ColorOrder)
    Set color order of preview output images. RGB or BGR
    method
    setFp16(self, fp16: bool)
    Set fp16 (0..255) data type of preview output frames
    method
    setFps(self, fps: typing.SupportsFloat)
    Set rate at which camera should produce frames  Parameter ``fps``:     Rate in frames per second
    method
    setInterleaved(self, interleaved: bool)
    Set planar or interleaved data of preview output frames
    method
    setIsp3aFps(self, isp3aFps: typing.SupportsInt)
    Set the ISP 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A is running on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also slows down 3A convergence. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps and a camera control is sent on every frame, but 3A fps is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.
    method
    setPreviewKeepAspectRatio(self, keep: bool)
    Specifies whether preview output should preserve aspect ratio, after downscaling from video size or not.  Parameter ``keep``:     If true, a larger crop region will be considered to still be able to create     the final image in the specified aspect ratio. Otherwise video size is     resized to fit preview size
    method
    setRawOutputPacked(self, packed: bool)
    Configures whether the camera `raw` frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (`false`), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000'00pp'pppp'pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.
    method
    setSensorCrop(self, x: typing.SupportsFloat, y: typing.SupportsFloat)
    Specifies the cropping that happens when converting ISP to video output. By default, video will be center cropped from the ISP output. Note that this doesn't actually do on-sensor cropping (and MIPI-stream only that region), but it does postprocessing on the ISP (on RVC).  Parameter ``x``:     Top left X coordinate  Parameter ``y``:     Top left Y coordinate
    method
    setStillNumFramesPool(self, arg0: typing.SupportsInt)
    Set number of frames in still pool
    method
    setVideoNumFramesPool(self, arg0: typing.SupportsInt)
    Set number of frames in video pool
    method
    setWaitForConfigInput(self, wait: bool)
    Specify to wait until inputConfig receives a configuration message, before sending out a frame.  Parameter ``wait``:     True to wait for inputConfig message, false otherwise
    property
    frameEvent
    Outputs metadata-only ImgFrame message as an early indicator of an incoming frame.  It's sent on the MIPI SoF (start-of-frame) event, just after the exposure of the current frame has finished and before the exposure for next frame starts. Could be used to synchronize various processes with camera capture. Fields populated: camera id, sequence number, timestamp
    property
    initialControl
    Initial control options to apply to sensor
    property
    inputConfig
    Input for ImageManipConfig message, which can modify crop parameters in runtime  Default queue is non-blocking with size 8
    property
    inputControl
    Input for CameraControl message, which can modify camera parameters in runtime  Default queue is blocking with size 8
    property
    isp
    Outputs ImgFrame message that carries YUV420 planar (I420/IYUV) frame data.  Generated by the ISP engine, and the source for the 'video', 'preview' and 'still' outputs
    property
    preview
    Outputs ImgFrame message that carries BGR/RGB planar/interleaved encoded frame data.  Suitable for use with NeuralNetwork node
    property
    raw
    Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.  Captured directly from the camera sensor, and the source for the 'isp' output.
    property
    still
    Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.  The message is sent only when a CameraControl message arrives to inputControl with captureStill command set.
    property
    video
    Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.  Suitable for use with VideoEncoder node
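    A typical ColorCamera setup (values are illustrative): the preview output is sized and formatted for a neural network, while video/still/isp remain available from the same sensor.

        cam = pipeline.create(dai.node.ColorCamera)
        cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)
        cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
        cam.setPreviewSize(300, 300)  # size of the 'preview' output
        cam.setInterleaved(False)     # planar layout, as most NN models expect
        cam.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)
        cam.setFps(30)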
    class

    depthai.node.DetectionNetwork(depthai.node.NeuralNetwork)

    method
    getConfidenceThreshold(self) -> float
    Retrieves threshold at which to filter the rest of the detections.  Returns:     Detection confidence
    method
    setConfidenceThreshold(self, thresh: typing.SupportsFloat)
    Specifies confidence threshold at which to filter the rest of the detections.  Parameter ``thresh``:     Detection confidence must be greater than specified threshold to be added to     the list
    property
    input
    Input message with data to be inferred upon. Default queue is blocking with size 5
    property
    out
    Outputs ImgDetections message that carries parsed detection results. Overrides NeuralNetwork 'out' with ImgDetections output message type.
    property
    outNetwork
    Outputs unparsed inference results.
    property
    passthrough
    Passthrough message on which the inference was performed.  Suitable for when input queue is set to non-blocking behavior.
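    DetectionNetwork is typically used through one of its specializations; a MobileNet-SSD sketch (the blob path is a placeholder, and cam/xout are nodes from the earlier sketches):

        nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
        nn.setBlobPath("mobilenet-ssd.blob")  # placeholder path to a compiled model
        nn.setConfidenceThreshold(0.5)        # drop detections below 50% confidence

        cam.preview.link(nn.input)
        nn.out.link(xout.input)
        # each dai.ImgDetection carries label, confidence and normalized xmin/ymin/xmax/ymax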
    class

    depthai.node.DetectionParser(depthai.Node)

    method
    getConfidenceThreshold(self) -> float
    Retrieves threshold at which to filter the rest of the detections.  Returns:     Detection confidence
    method
    getNumFramesPool(self) -> int
    Returns number of frames in pool
    method
    setBlob(self, blob: depthai.OpenVINO.Blob)
    Sets the network blob and retrieves some input tensor information from it  Parameter ``blob``:     OpenVINO blob to retrieve the information from
    method
    setConfidenceThreshold(self, thresh: typing.SupportsFloat)
    Specifies confidence threshold at which to filter the rest of the detections.  Parameter ``thresh``:     Detection confidence must be greater than specified threshold to be added to     the list
    method
    setNumFramesPool(self, numFramesPool: typing.SupportsInt)
    Specify number of frames in pool.  Parameter ``numFramesPool``:     How many frames should the pool have
    property
    input
    Input NN results with detection data to parse. Default queue is blocking with size 5
    property
    out
    Outputs ImgDetections message that carries parsed detection results
    class

    depthai.node.EdgeDetector(depthai.Node)

    method
    getWaitForConfigInput(self) -> bool
    See also:     setWaitForConfigInput  Returns:     True if wait for inputConfig message, false otherwise
    method
    setMaxOutputFrameSize(self, arg0: typing.SupportsInt)
    Specify maximum size of output image.  Parameter ``maxFrameSize``:     Maximum frame size in bytes
    method
    setNumFramesPool(self, arg0: typing.SupportsInt)
    Specify number of frames in pool.  Parameter ``numFramesPool``:     How many frames should the pool have
    method
    setWaitForConfigInput(self, wait: bool)
    Specify whether or not to wait until a configuration message arrives on the inputConfig input.  Parameter ``wait``:     True to wait for configuration message, false otherwise.
    property
    initialConfig
    Initial config to use for edge detection.
    property
    inputConfig
    Input EdgeDetectorConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
    property
    inputImage
    Input image on which edge detection is performed. Default queue is non-blocking with size 4.
    property
    outputImage
    Outputs image frame with detected edges
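    A sketch of the EdgeDetector node (assuming the mono camera and XLinkOut nodes from the earlier sketches); the kernels below are the standard 3x3 Sobel pair and can be swapped at runtime through inputConfig.

        edge = pipeline.create(dai.node.EdgeDetector)
        edge.setMaxOutputFrameSize(mono.getResolutionWidth() * mono.getResolutionHeight())
        mono.out.link(edge.inputImage)
        edge.outputImage.link(xout.input)

        # runtime reconfiguration, sent from the host through an XLinkIn queue:
        cfg = dai.EdgeDetectorConfig()
        cfg.setSobelFilterKernels([[1, 0, -1], [2, 0, -2], [1, 0, -1]],
                                  [[1, 2, 1], [0, 0, 0], [-1, -2, -1]])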
    class

    depthai.node.FeatureTracker(depthai.Node)

    method
    getWaitForConfigInput(self) -> bool
    See also:     setWaitForConfigInput  Returns:     True if wait for inputConfig message, false otherwise
    method
    setHardwareResources(self, numShaves: typing.SupportsInt, numMemorySlices: typing.SupportsInt)
    Specify allocated hardware resources for feature tracking. 2 shaves/memory slices are required for optical flow, 1 for corner detection only.  Parameter ``numShaves``:     Number of shaves. Maximum 2.  Parameter ``numMemorySlices``:     Number of memory slices. Maximum 2.
    method
    setWaitForConfigInput(self, wait: bool)
    Specify whether or not to wait until a configuration message arrives on the inputConfig input.  Parameter ``wait``:     True to wait for configuration message, false otherwise.
    property
    initialConfig
    Initial config to use for feature tracking.
    property
    inputConfig
    Input FeatureTrackerConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
    property
    inputImage
    Input message with frame data on which feature tracking is performed. Default queue is non-blocking with size 4.
    property
    outputFeatures
    Outputs TrackedFeatures message that carries tracked features results.
    property
    passthroughInputImage
    Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
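    A sketch of the FeatureTracker node (mono and xout as in the earlier sketches); both shaves and memory slices are raised to 2, which the docs above note is required for optical flow.

        ft = pipeline.create(dai.node.FeatureTracker)
        ft.setHardwareResources(2, 2)  # shaves, memory slices
        mono.out.link(ft.inputImage)
        ft.outputFeatures.link(xout.input)
        # each dai.TrackedFeature carries a .position (x, y) and a persistent .id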
    class

    depthai.node.IMU(depthai.Node)

    method
    getBatchReportThreshold(self) -> int
    Above this packet threshold data will be sent to host, if queue is not blocked
    method
    getMaxBatchReports(self) -> int
    Maximum number of IMU packets in a batch report
    method
    setBatchReportThreshold(self, batchReportThreshold: typing.SupportsInt)
    Above this packet threshold data will be sent to host, if queue is not blocked
    method
    setMaxBatchReports(self, maxBatchReports: typing.SupportsInt)
    Maximum number of IMU packets in a batch report
    property
    out
    Outputs IMUData message that carries IMU packets.
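    A sketch of a typical IMU setup (sensor choice and rates are illustrative); batching controls how many packets are grouped before being sent to the host.

        imu = pipeline.create(dai.node.IMU)
        imu.enableIMUSensor(dai.IMUSensor.ACCELEROMETER_RAW, 500)  # 500 Hz
        imu.enableIMUSensor(dai.IMUSensor.GYROSCOPE_RAW, 400)      # 400 Hz
        imu.setBatchReportThreshold(1)  # send as soon as one packet is ready
        imu.setMaxBatchReports(10)      # at most 10 packets per IMUData message
        imu.out.link(xout.input)
        # on the host: for packet in q.get().packets: packet.acceleroMeter, packet.gyroscope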
    class

    depthai.node.ImageAlign(depthai.Node)

    method
    setOutKeepAspectRatio(self, arg0: bool) -> ImageAlign
    Specify whether to keep aspect ratio when resizing
    property
    initialConfig
    Initial config to use when aligning the frames.
    property
    input
    Input message. Default queue is non-blocking with size 4.
    property
    inputAlignTo
    Input align to message. Default queue is non-blocking with size 1.
    property
    inputConfig
    Input message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
    property
    outputAligned
    Outputs ImgFrame message that is aligned to inputAlignTo.
    property
    passthroughInput
    Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
    class

    depthai.node.ImageManip(depthai.Node)

    method
    getWaitForConfigInput(self) -> bool
    See also:     setWaitForConfigInput  Returns:     True if wait for inputConfig message, false otherwise
    method
    setMaxOutputFrameSize(self, arg0: typing.SupportsInt)
    Specify maximum size of output image.  Parameter ``maxFrameSize``:     Maximum frame size in bytes
    method
    setNumFramesPool(self, arg0: typing.SupportsInt)
    Specify number of frames in pool.  Parameter ``numFramesPool``:     How many frames should the pool have
    method
    setWaitForConfigInput(self, wait: bool)
    Specify whether or not to wait until a configuration message arrives on the inputConfig input.  Parameter ``wait``:     True to wait for configuration message, false otherwise.
    property
    initialConfig
    Initial config to use when manipulating frames
    property
    inputConfig
    Input ImageManipConfig message with ability to modify parameters in runtime. Default queue is blocking with size 8
    property
    inputImage
    Input image to be modified. Default queue is blocking with size 8
    property
    out
    Outputs ImgFrame message that carries modified image.
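    A common ImageManip use (cam and nn as in the earlier sketches): resizing and converting camera frames into the planar BGR layout a neural network expects; sizes and frame type are illustrative.

        manip = pipeline.create(dai.node.ImageManip)
        manip.initialConfig.setResize(300, 300)
        manip.initialConfig.setFrameType(dai.RawImgFrame.Type.BGR888p)
        cam.video.link(manip.inputImage)
        manip.out.link(nn.input)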
    class

    depthai.node.MessageDemux(depthai.Node)

    method
    getProcessor(self) -> depthai.ProcessorType
    Get on which processor the node should run  Returns:     Processor type - Leon CSS or Leon MSS
    method
    setProcessor(self, arg0: depthai.ProcessorType)
    Set on which processor the node should run  Parameter ``type``:     Processor type - Leon CSS or Leon MSS
    property
    input
    Input message of type MessageGroup
    property
    outputs
    A map of outputs, where keys are the same as in the input MessageGroup
    class

    depthai.node.MonoCamera(depthai.Node)

    method
    getBoardSocket(self) -> depthai.CameraBoardSocket
    Retrieves which board socket to use  Returns:     Board socket to use
    method
    getCamera(self) -> str
    Retrieves which camera to use by name  Returns:     Name of the camera to use
    method
    getFps(self) -> float
    Get rate at which camera should produce frames  Returns:     Rate in frames per second
    method
    getNumFramesPool(self) -> int
    Get number of frames in main (ISP output) pool
    method
    getRawNumFramesPool(self) -> int
    Get number of frames in raw pool
    method
    getResolutionHeight(self) -> int
    Get sensor resolution height
    method
    getResolutionWidth(self) -> int
    Get sensor resolution width
    method
    setBoardSocket(self, boardSocket: depthai.CameraBoardSocket)
    Specify which board socket to use  Parameter ``boardSocket``:     Board socket to use
    method
    setCamera(self, name: str)
    Specify which camera to use by name  Parameter ``name``:     Name of the camera to use
    method
    setFps(self, fps: typing.SupportsFloat)
    Set rate at which camera should produce frames  Parameter ``fps``:     Rate in frames per second
    method
    setIsp3aFps(self, isp3aFps: typing.SupportsInt)
    Set the ISP 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A is running on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also slows down 3A convergence. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps and a camera control is sent on every frame, but 3A fps is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.
    method
    setNumFramesPool(self, arg0: typing.SupportsInt)
    Set number of frames in main (ISP output) pool
    method
    setRawOutputPacked(self, packed: bool)
    Configures whether the camera `raw` frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (`false`), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000'00pp'pppp'pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.
    property
    frameEvent
    Outputs metadata-only ImgFrame message as an early indicator of an incoming frame.  It's sent on the MIPI SoF (start-of-frame) event, just after the exposure of the current frame has finished and before the exposure for next frame starts. Could be used to synchronize various processes with camera capture. Fields populated: camera id, sequence number, timestamp
    property
    initialControl
    Initial control options to apply to sensor
    property
    inputControl
    Input for CameraControl message, which can modify camera parameters in runtime. Default queue is blocking with size 8
    property
    out
    Outputs ImgFrame message that carries RAW8 encoded (grayscale) frame data.  Suitable for use with the StereoDepth node. Processed by ISP
    property
    raw
    Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.  Captured directly from the camera sensor
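    MonoCamera is most often used in pairs feeding StereoDepth; the sockets and resolution below are illustrative.

        monoLeft = pipeline.create(dai.node.MonoCamera)
        monoRight = pipeline.create(dai.node.MonoCamera)
        monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
        monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
        monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
        monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)

        stereo = pipeline.create(dai.node.StereoDepth)
        monoLeft.out.link(stereo.left)
        monoRight.out.link(stereo.right)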
    class

    depthai.node.NeuralNetwork(depthai.Node)

    method
    getNumInferenceThreads(self) -> int
    How many inference threads will be used to run the network  Returns:     Number of threads, 0, 1 or 2. Zero means AUTO
    method
    setBlobPath(self, path: Path)
    Load network blob into assets and use once pipeline is started.  Throws:     Error if file doesn't exist or isn't a valid network blob.  Parameter ``path``:     Path to network blob
    method
    setNumInferenceThreads(self, numThreads: typing.SupportsInt)
    How many threads should the node use to run the network.  Parameter ``numThreads``:     Number of threads to dedicate to this node
    method
    setNumNCEPerInferenceThread(self, numNCEPerThread: typing.SupportsInt)
    How many Neural Compute Engines should a single thread use for inference  Parameter ``numNCEPerThread``:     Number of NCE per thread
    method
    setNumPoolFrames(self, numFrames: typing.SupportsInt)
    Specifies how many frames will be available in the pool  Parameter ``numFrames``:     How many frames will pool have
    property
    input
    Input message with data to be inferred upon. Default queue is blocking with size 5
    property
    inputs
    Inputs mapped to network inputs. Useful for inferring from separate data sources. Default input is non-blocking with queue size 1 and waits for messages
    property
    out
    Outputs NNData message that carries inference results
    property
    passthrough
    Passthrough message on which the inference was performed.  Suitable for when input queue is set to non-blocking behavior.
    property
    passthroughs
    Passthroughs which correspond to specified input
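    A generic NeuralNetwork sketch (cam and xout as in the earlier sketches); the blob path and output layer name depend entirely on the model and are placeholders here.

        nn = pipeline.create(dai.node.NeuralNetwork)
        nn.setBlobPath("model.blob")  # placeholder path
        nn.setNumInferenceThreads(2)
        cam.preview.link(nn.input)
        nn.out.link(xout.input)
        # on the host: q.get().getLayerFp16("output")  # layer name is model-specific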
    class

    depthai.node.ObjectTracker(depthai.Node)

    method
    setDetectionLabelsToTrack(self, labels: collections.abc.Sequence [ typing.SupportsInt ])
    Specify detection labels to track.  Parameter ``labels``:     Detection labels to track. Default every label is tracked from image     detection network output.
    method
    setMaxObjectsToTrack(self, maxObjectsToTrack: typing.SupportsInt)
    Specify maximum number of object to track.  Parameter ``maxObjectsToTrack``:     Maximum number of object to track. Maximum 60 in case of SHORT_TERM_KCF,     otherwise 1000.
    method
    setOcclusionRatioThreshold(self, occlusionRatioThreshold: typing.SupportsFloat)
    Specify occlusion ratio threshold.  Parameter ``occlusionRatioThreshold``:     Occlusion ratio threshold. Used to filter out overlapping tracklets. Default     0.4.
    method
    setTrackerIdAssignmentPolicy(self, type: depthai.TrackerIdAssignmentPolicy)
    Specify tracker ID assignment policy.  Parameter ``type``:     Tracker ID assignment policy.
    method
    setTrackerThreshold(self, threshold: typing.SupportsFloat)
    Specify tracker threshold.  Parameter ``threshold``:     Above this threshold the detected objects will be tracked. Default 0, all     image detections are tracked.
    method
    setTrackerType(self, type: depthai.TrackerType)
    Specify tracker type algorithm.  Parameter ``type``:     Tracker type.
    method
    setTrackingPerClass(self, trackingPerClass: bool)
    Whether tracker should take into consideration class label for tracking.
    method
    setTrackletBirthThreshold(self, threshold: typing.SupportsInt)
    Specify tracklet birth threshold.  Parameter ``threshold``:     Tracklet birth threshold. Minimum consecutive tracked frames required to     consider a tracklet as a new instance. Default 3.
    method
    setTrackletMaxLifespan(self, lifespan: typing.SupportsInt)
    Specify tracklet lifespan.  Parameter ``lifespan``:     Tracklet lifespan in number of frames. Number of frames after which a LOST     tracklet is removed. Default 120.
    property
    inputConfig
    Input ObjectTrackerConfig message with ability to modify parameters at runtime. Default queue is non-blocking with size 4.
    property
    inputDetectionFrame
    Input ImgFrame message on which object detection was performed. Default queue is non-blocking with size 4.
    property
    inputDetections
    Input message with image detection from neural network. Default queue is non-blocking with size 4.
    property
    inputTrackerFrame
    Input ImgFrame message on which tracking will be performed. RGBp, BGRp, NV12, YUV420p types are supported. Default queue is non-blocking with size 4.
    property
    out
    Outputs Tracklets message that carries object tracking results.
    property
    passthroughDetectionFrame
    Passthrough ImgFrame message on which object detection was performed. Suitable for when input queue is set to non-blocking behavior.
    property
    passthroughDetections
    Passthrough image detections message from neural network output. Suitable for when input queue is set to non-blocking behavior.
    property
    passthroughTrackerFrame
    Passthrough ImgFrame message on which tracking was performed. Suitable for when input queue is set to non-blocking behavior.
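    A sketch of wiring ObjectTracker behind a detection network, following the official object-tracker example (nn and xout as in earlier sketches); the label ID is a placeholder that depends on the model's class list.

        tracker = pipeline.create(dai.node.ObjectTracker)
        tracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)
        tracker.setDetectionLabelsToTrack([15])  # e.g. 'person' in MobileNet-SSD
        tracker.setTrackerIdAssignmentPolicy(dai.TrackerIdAssignmentPolicy.SMALLEST_ID)

        nn.passthrough.link(tracker.inputDetectionFrame)
        nn.passthrough.link(tracker.inputTrackerFrame)
        nn.out.link(tracker.inputDetections)
        tracker.out.link(xout.input)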
    class

    depthai.node.PointCloud(depthai.Node)

    method
    setNumFramesPool(self, arg0: typing.SupportsInt)
    Specify number of frames in pool.  Parameter ``numFramesPool``:     How many frames should the pool have
    property
    initialConfig
    Initial config to use when computing the point cloud.
    property
    inputConfig
    Input PointCloudConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
    property
    inputDepth
    Input message with depth data used to create the point cloud. Default queue is non-blocking with size 4.
    property
    outputPointCloud
    Outputs PointCloudData message
    property
    passthroughDepth
    Passthrough depth from which the point cloud was calculated. Suitable for when input queue is set to non-blocking behavior.
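    A sketch of the PointCloud node fed by StereoDepth (stereo and xout as in earlier sketches); reading the points back as a numpy array assumes a recent v2 release that ships PointCloudData.getPoints().

        pcl = pipeline.create(dai.node.PointCloud)
        stereo.depth.link(pcl.inputDepth)
        pcl.outputPointCloud.link(xout.input)
        # on the host: points = q.get().getPoints()  # (N, 3) array, millimeters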
    class

    depthai.node.SPIIn(depthai.Node)

    method
    getMaxDataSize(self) -> int
    Get maximum messages size in bytes
    method
    getNumFrames(self) -> int
    Get number of frames in pool
    method
    setBusId(self, id: typing.SupportsInt)
    Specifies SPI Bus number to use  Parameter ``id``:     SPI Bus id
    method
    setMaxDataSize(self, maxDataSize: typing.SupportsInt)
    Set maximum message size it can receive  Parameter ``maxDataSize``:     Maximum size in bytes
    method
    setNumFrames(self, numFrames: typing.SupportsInt)
    Set number of frames in pool for sending messages forward  Parameter ``numFrames``:     Maximum number of frames in pool
    method
    setStreamName(self, name: str)
    Specifies stream name over which the node will receive data  Parameter ``name``:     Stream name
    property
    out
    Outputs message of same type as send from host.
    class

    depthai.node.SPIOut(depthai.Node)

    method
    setBusId(self, id: typing.SupportsInt)
    Specifies SPI Bus number to use  Parameter ``id``:     SPI Bus id
    method
    setStreamName(self, name: str)
    Specifies stream name over which the node will send data  Parameter ``name``:     Stream name
    property
    input
    Input for any type of messages to be transferred over SPI stream  Default queue is blocking with size 8
    class

    depthai.node.Script(depthai.Node)

    method
    getProcessor(self) -> depthai.ProcessorType: depthai.ProcessorType
    Get on which processor the script should run  Returns:     Processor type - Leon CSS or Leon MSS
    method
    getScriptName(self) -> str: str
    Get the script name in utf-8.  When a name was set with setScript() or setScriptPath(), returns that name. When a script was loaded with setScriptPath() without providing a name, returns the utf-8 string of that path. Otherwise returns "<script>"  Returns:     std::string of script name in utf-8
    method
    setProcessor(self, arg0: depthai.ProcessorType)
    Set on which processor the script should run  Parameter ``type``:     Processor type - Leon CSS or Leon MSS
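    A minimal sketch of the Script node; setScript() (one of the setters elided above, referenced by getScriptName) loads the on-device script body, which here is just an illustrative heartbeat loop:

        from textwrap import dedent

        import depthai as dai

        pipeline = dai.Pipeline()

        script = pipeline.create(dai.node.Script)
        script.setProcessor(dai.ProcessorType.LEON_CSS)  # Leon CSS or Leon MSS
        script.setScript(dedent("""
            import time
            while True:
                time.sleep(1)
                node.warn('heartbeat')  # shows up in device logs
        """))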
    class

    depthai.node.SpatialDetectionNetwork(depthai.node.DetectionNetwork)

    method
    setBoundingBoxScaleFactor(self, scaleFactor: typing.SupportsFloat)
    Specifies scale factor for detected bounding boxes.  Parameter ``scaleFactor``:     Scale factor must be in the interval (0,1].
    method
    setDepthLowerThreshold(self, lowerThreshold: typing.SupportsInt)
    Specifies lower threshold in depth units (millimeter by default) for depth values which will be used to calculate spatial data  Parameter ``lowerThreshold``:     LowerThreshold must be in the interval [0, upperThreshold] and less than     upperThreshold.
    method
    setDepthUpperThreshold(self, upperThreshold: typing.SupportsInt)
    Specifies upper threshold in depth units (millimeter by default) for depth values which will be used to calculate spatial data  Parameter ``upperThreshold``:     UpperThreshold must be in the interval (lowerThreshold,65535].
    method
    setSpatialCalculationAlgorithm(self, calculationAlgorithm: depthai.SpatialLocationCalculatorAlgorithm)
    Specifies spatial location calculator algorithm: Average/Min/Max  Parameter ``calculationAlgorithm``:     Calculation algorithm.
    method
    setSpatialCalculationStepSize(self, stepSize: typing.SupportsInt)
    Specifies spatial location calculator step size for depth calculation. Step size 1 means that every pixel is taken into calculation, step size 2 means every second pixel, etc.  Parameter ``stepSize``:     Step size.
    property
    boundingBoxMapping
    Outputs mapping of detected bounding boxes relative to depth map  Suitable for when displaying remapped bounding boxes on depth frame
    property
    input
    Input message with data to be inferred upon. Default queue is blocking with size 5
    property
    inputDepth
    Input message with depth data used to retrieve spatial information about detected object. Default queue is non-blocking with size 4
    property
    out
    Outputs ImgDetections message that carries parsed detection results.
    property
    passthrough
    Passthrough message on which the inference was performed.  Suitable for when input queue is set to non-blocking behavior.
    property
    passthroughDepth
    Passthrough message for depth frame on which the spatial location calculation was performed.  Suitable for when input queue is set to non-blocking behavior.
    property
    spatialLocationCalculatorOutput
    Output of SpatialLocationCalculator node, which is used internally by SpatialDetectionNetwork. Suitable when extra information is required from SpatialLocationCalculator node, e.g. minimum, maximum distance.
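    SpatialDetectionNetwork is used through its concrete subclasses (MobileNetSpatialDetectionNetwork, YoloSpatialDetectionNetwork). A minimal sketch with an illustrative model path:

        import depthai as dai

        pipeline = dai.Pipeline()

        camRgb = pipeline.create(dai.node.ColorCamera)
        camRgb.setPreviewSize(300, 300)

        monoLeft = pipeline.create(dai.node.MonoCamera)
        monoRight = pipeline.create(dai.node.MonoCamera)
        stereo = pipeline.create(dai.node.StereoDepth)
        monoLeft.out.link(stereo.left)
        monoRight.out.link(stereo.right)

        sdn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
        sdn.setBlobPath("model.blob")       # placeholder model path
        sdn.setBoundingBoxScaleFactor(0.5)  # average depth over half of each bbox
        sdn.setDepthLowerThreshold(100)     # ignore depth closer than 100 mm
        sdn.setDepthUpperThreshold(5000)    # ignore depth beyond 5 m

        camRgb.preview.link(sdn.input)
        stereo.depth.link(sdn.inputDepth)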
    class

    depthai.node.SpatialLocationCalculator(depthai.Node)

    method
    getWaitForConfigInput(self) -> bool: bool
    See also:     setWaitForConfigInput  Returns:     True if wait for inputConfig message, false otherwise
    method
    setWaitForConfigInput(self, wait: bool)
    Specify whether or not to wait until a configuration message arrives on the inputConfig input.  Parameter ``wait``:     True to wait for configuration message, false otherwise.
    property
    initialConfig
    Initial config to use when calculating spatial location data.
    property
    inputConfig
    Input SpatialLocationCalculatorConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
    property
    inputDepth
    Input message with depth data used to retrieve spatial information about detected object. Default queue is non-blocking with size 4.
    property
    out
    Outputs SpatialLocationCalculatorData message that carries spatial location results.
    property
    passthroughDepth
    Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
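    A minimal sketch: one static ROI configured through initialConfig, with depth clamped via the thresholds described above (the stream name is illustrative):

        import depthai as dai

        pipeline = dai.Pipeline()

        monoLeft = pipeline.create(dai.node.MonoCamera)
        monoRight = pipeline.create(dai.node.MonoCamera)
        stereo = pipeline.create(dai.node.StereoDepth)
        monoLeft.out.link(stereo.left)
        monoRight.out.link(stereo.right)

        slc = pipeline.create(dai.node.SpatialLocationCalculator)

        roi = dai.SpatialLocationCalculatorConfigData()
        roi.roi = dai.Rect(dai.Point2f(0.4, 0.4), dai.Point2f(0.6, 0.6))  # center
        roi.depthThresholds.lowerThreshold = 100    # mm
        roi.depthThresholds.upperThreshold = 10000  # mm
        slc.initialConfig.addROI(roi)

        stereo.depth.link(slc.inputDepth)

        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("spatialData")
        slc.out.link(xout.input)  # SpatialLocationCalculatorData messages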
    class

    depthai.node.StereoDepth(depthai.Node)

    class
    PresetMode
    Preset modes for stereo depth.  Members:    HIGH_ACCURACY : **Deprecated:** Will be removed in future releases and replaced with DEFAULT    HIGH_DENSITY : **Deprecated:** Will be removed in future releases and replaced with DEFAULT    DEFAULT    FACE    HIGH_DETAIL    ROBOTICS
    method
    enableDistortionCorrection(self, arg0: bool)
    Equivalent to useHomographyRectification(!enableDistortionCorrection)
    method
    getMaxDisparity(self) -> float: float
    Useful for normalization of the disparity map.  Returns:     Maximum disparity value that the node can return
    method
    loadMeshData(self, dataLeft: collections.abc.Sequence [ typing.SupportsInt ], dataRight: collections.abc.Sequence [ typing.SupportsInt ])
    Specify mesh calibration data for 'left' and 'right' inputs, as vectors of bytes. Overrides useHomographyRectification behavior. See `loadMeshFiles` for the expected data format
    method
    loadMeshFiles(self, pathLeft: Path, pathRight: Path)
    Specify local filesystem paths to the mesh calibration files for 'left' and 'right' inputs.  When a mesh calibration is set, it overrides the camera intrinsics/extrinsics matrices. Overrides useHomographyRectification behavior. Mesh format: a sequence of (y,x) points as 'float' with coordinates from the input image to be mapped in the output. The mesh can be subsampled, configured by `setMeshStep`.  With a 1280x800 resolution and the default (16,16) step, the required mesh size is:  width: 1280 / 16 + 1 = 81  height: 800 / 16 + 1 = 51
    method
    setAlphaScaling(self, arg0: typing.SupportsFloat)
    Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). On some high-distortion lenses, and/or due to rectification (image rotated), invalid areas may appear even with alpha=0; in these cases an alpha value below 0.0 helps remove them. See getOptimalNewCameraMatrix from OpenCV for more details.
    method
    setBaseline(self, arg0: typing.SupportsFloat)
    Override baseline from calibration. Used only in disparity to depth conversion. Units are centimeters.
    method
    setConfidenceThreshold(self, arg0: typing.SupportsInt)
    Confidence threshold for disparity calculation  Parameter ``confThr``:     Confidence threshold value 0..255
    method
    setDefaultProfilePreset(self, arg0: StereoDepth.PresetMode)
    Sets a default preset based on specified option.  Parameter ``mode``:     Stereo depth preset mode
    method
    setDepthAlignmentUseSpecTranslation(self, arg0: bool)
    Use baseline information for depth alignment from specs (design data) or from calibration. Default: true
    method
    setDisparityToDepthUseSpecTranslation(self, arg0: bool)
    Use baseline information for disparity to depth conversion from specs (design data) or from calibration. Default: true
    method
    setEmptyCalibration(self)
    Specify that a passthrough/dummy calibration should be used, when input frames are already rectified (e.g. sourced from recordings on the host)
    method
    setExtendedDisparity(self, enable: bool)
    Disparity range increased from 0-95 to 0-190, combined from full resolution and downscaled images.  Suitable for short range objects. Currently incompatible with sub-pixel disparity
    method
    setFocalLength(self, arg0: typing.SupportsFloat)
    Override focal length from calibration. Used only in disparity to depth conversion. Units are pixels.
    method
    setFocalLengthFromCalibration(self, arg0: bool)
    Whether to use focal length from calibration intrinsics or calculate based on calibration FOV. Default value is true.
    method
    setLeftRightCheck(self, enable: bool)
    Computes disparity in both L-R and R-L directions and combines them, for better occlusion handling by discarding invalid disparity values
    method
    setMedianFilter(self, arg0: depthai.MedianFilter)
    Parameter ``median``:     Set kernel size for disparity/depth median filtering, or disable
    method
    setMeshStep(self, width: typing.SupportsInt, height: typing.SupportsInt)
    Set the distance between mesh points. Default: (16, 16)
    method
    setNumFramesPool(self, arg0: typing.SupportsInt)
    Specify number of frames in pool.  Parameter ``numFramesPool``:     How many frames should the pool have
    method
    setOutputKeepAspectRatio(self, keep: bool)
    Specifies whether the frames resized by `setOutputSize` should preserve aspect ratio, with potential cropping when enabled. Default `true`
    method
    setOutputSize(self, width: typing.SupportsInt, height: typing.SupportsInt)
    Specify disparity/depth output resolution size, implemented by scaling.  Currently only applicable when aligning to RGB camera
    method
    setPostProcessingHardwareResources(self, arg0: typing.SupportsInt, arg1: typing.SupportsInt)
    Specify allocated hardware resources for stereo depth. Suitable only to increase post processing runtime.  Parameter ``numShaves``:     Number of shaves.  Parameter ``numMemorySlices``:     Number of memory slices.
    method
    setRectification(self, enable: bool)
    Rectify input images or not.
    method
    setRectificationUseSpecTranslation(self, arg0: bool)
    Obtain rectification matrices using spec translation (design data) or from calibration in calculations. Should be used only for debugging. Default: false
    method
    setRectifyEdgeFillColor(self, color: typing.SupportsInt)
    Fill color for missing data at frame edges  Parameter ``color``:     Grayscale 0..255, or -1 to replicate pixels
    method
    setRectifyMirrorFrame(self, arg0: bool)
    DEPRECATED function. It was removed, since rectified images are not flipped anymore. Mirror rectified frames, only when LR-check mode is disabled. Default `true`. The mirroring is required to have a normal non-mirrored disparity/depth output.  A side effect of this option is disparity alignment to the perspective of left or right input: `false`: mapped to left and mirrored, `true`: mapped to right. With LR-check enabled, this option is ignored, none of the outputs are mirrored, and disparity is mapped to right.  Parameter ``enable``:     True for normal disparity/depth, otherwise mirrored
    method
    setRuntimeModeSwitch(self, arg0: bool)
    Enable runtime stereo mode switch, e.g. from standard to LR-check. Note: when enabled, resources are allocated for the worst case, to allow switching to any mode.
    method
    setSubpixel(self, enable: bool)
    Computes disparity with sub-pixel interpolation (3 fractional bits by default).  Suitable for long range. Currently incompatible with extended disparity
    method
    setSubpixelFractionalBits(self, subpixelFractionalBits: typing.SupportsInt)
    Number of fractional bits for subpixel mode. Default value: 3. Valid values: 3,4,5. Defines the number of fractional disparities: 2^x. Median filter postprocessing is supported only for 3 fractional bits.
    method
    useHomographyRectification(self, arg0: bool)
    Use 3x3 homography matrix for stereo rectification instead of sparse mesh generated on device. Default behaviour is AUTO: for lenses with FOV over 85 degrees a sparse mesh is used, otherwise a 3x3 homography. If custom mesh data is provided through loadMeshData or loadMeshFiles this option is ignored.  Parameter ``useHomographyRectification``:     true: 3x3 homography matrix generated from calibration data is used for     stereo rectification, can't correct lens distortion. false: sparse mesh is     generated on-device from calibration data with mesh step specified with     setMeshStep (Default: (16, 16)), can correct lens distortion. Implementation     for generating the mesh is the same as OpenCV's initUndistortRectifyMap     function. Only the first 8 distortion coefficients are used from calibration     data.
    property
    confidenceMap
    Outputs ImgFrame message that carries RAW8 confidence map. Lower values mean higher confidence of the calculated disparity value. RGB alignment, left-right check or any postprocessing (e.g. median filter) is not performed on the confidence map.
    property
    debugDispCostDump
    Outputs ImgFrame message that carries cost dump of disparity map. Useful for debugging/fine tuning.
    property
    debugDispLrCheckIt1
    Outputs ImgFrame message that carries left-right check first iteration (before combining with second iteration) disparity map. Useful for debugging/fine tuning.
    property
    debugDispLrCheckIt2
    Outputs ImgFrame message that carries left-right check second iteration (before combining with first iteration) disparity map. Useful for debugging/fine tuning.
    property
    debugExtDispLrCheckIt1
    Outputs ImgFrame message that carries extended left-right check first iteration (downscaled frame, before combining with second iteration) disparity map. Useful for debugging/fine tuning.
    property
    debugExtDispLrCheckIt2
    Outputs ImgFrame message that carries extended left-right check second iteration (downscaled frame, before combining with first iteration) disparity map. Useful for debugging/fine tuning.
    property
    depth
    Outputs ImgFrame message that carries RAW16 encoded (0..65535) depth data in depth units (millimeter by default).  Non-determined / invalid depth values are set to 0
    property
    disparity
    Outputs ImgFrame message that carries RAW8 / RAW16 encoded disparity data: RAW8 encoded (0..95) for standard mode; RAW8 encoded (0..190) for extended disparity mode; RAW16 encoded for subpixel disparity mode: - 0..760 for 3 fractional bits (by default) - 0..1520 for 4 fractional bits - 0..3040 for 5 fractional bits
    property
    initialConfig
    Initial config to use for StereoDepth.
    property
    inputConfig
    Input StereoDepthConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
    property
    left
    Input for left ImgFrame of left-right pair  Default queue is non-blocking with size 8
    property
    outConfig
    Outputs StereoDepthConfig message that contains current stereo configuration.
    property
    rectifiedLeft
    Outputs ImgFrame message that carries RAW8 encoded (grayscale) rectified frame data.
    property
    rectifiedRight
    Outputs ImgFrame message that carries RAW8 encoded (grayscale) rectified frame data.
    property
    right
    Input for right ImgFrame of left-right pair  Default queue is non-blocking with size 8
    property
    syncedLeft
    Passthrough ImgFrame message from 'left' Input.
    property
    syncedRight
    Passthrough ImgFrame message from 'right' Input.
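    A minimal sketch combining the most common setters above; camera sockets and the stream name are illustrative:

        import depthai as dai

        pipeline = dai.Pipeline()

        monoLeft = pipeline.create(dai.node.MonoCamera)
        monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
        monoRight = pipeline.create(dai.node.MonoCamera)
        monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)

        stereo = pipeline.create(dai.node.StereoDepth)
        stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.DEFAULT)
        stereo.setLeftRightCheck(True)  # better occlusion handling
        stereo.setSubpixel(True)        # finer disparity steps, better long range
        stereo.setMedianFilter(dai.MedianFilter.KERNEL_7x7)

        monoLeft.out.link(stereo.left)
        monoRight.out.link(stereo.right)

        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("depth")
        stereo.depth.link(xout.input)  # RAW16 depth, millimeters by default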
    class

    depthai.node.Sync(depthai.Node)

    method
    getProcessor(self) -> depthai.ProcessorType: depthai.ProcessorType
    Get on which processor the node should run  Returns:     Processor type - Leon CSS or Leon MSS
    method
    getSyncAttempts(self) -> int: int
    Gets the number of sync attempts
    method
    getSyncThreshold(self) -> datetime.timedelta: datetime.timedelta
    Gets the maximal interval between messages in the group in milliseconds
    method
    setProcessor(self, arg0: depthai.ProcessorType)
    Set on which processor the node should run  Parameter ``type``:     Processor type - Leon CSS or Leon MSS
    method
    setSyncAttempts(self, maxDataSize: typing.SupportsInt)
    Set the number of attempts to get the specified max interval between messages in the group  Parameter ``syncAttempts``:     Number of attempts to get the specified max interval between messages in the     group: if syncAttempts = 0 the node sends a message as soon as the group is     filled; if syncAttempts > 0 the node will make syncAttempts attempts to     synchronize before sending out a message; if syncAttempts = -1 (default) the     node will only send a message if successfully synchronized
    method
    setSyncThreshold(self, syncThreshold: datetime.timedelta)
    Set the maximal interval between messages in the group  Parameter ``syncThreshold``:     Maximal interval between messages in the group
    property
    inputs
    A map of inputs
    property
    out
    Output message of type MessageGroup
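    A minimal sketch grouping a color stream with depth; the input-map keys ("rgb", "depth") are arbitrary labels chosen for illustration:

        from datetime import timedelta

        import depthai as dai

        pipeline = dai.Pipeline()

        camRgb = pipeline.create(dai.node.ColorCamera)
        monoLeft = pipeline.create(dai.node.MonoCamera)
        monoRight = pipeline.create(dai.node.MonoCamera)
        stereo = pipeline.create(dai.node.StereoDepth)
        monoLeft.out.link(stereo.left)
        monoRight.out.link(stereo.right)

        sync = pipeline.create(dai.node.Sync)
        sync.setSyncThreshold(timedelta(milliseconds=20))  # max spread in a group

        camRgb.video.link(sync.inputs["rgb"])
        stereo.depth.link(sync.inputs["depth"])

        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("synced")
        sync.out.link(xout.input)  # one MessageGroup per matched set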
    class

    depthai.node.SystemLogger(depthai.Node)

    method
    getRate(self) -> float: float
    Gets logging rate, at which messages will be sent out
    method
    setRate(self, hz: typing.SupportsFloat)
    Specify logging rate, at which messages will be sent out  Parameter ``hz``:     Sending rate in hertz (messages per second)
    property
    out
    Outputs SystemInformation message that carries various system information like memory and CPU usage, temperatures, ...
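    A minimal sketch streaming SystemInformation messages to the host once per second (the stream name is illustrative):

        import depthai as dai

        pipeline = dai.Pipeline()

        syslog = pipeline.create(dai.node.SystemLogger)
        syslog.setRate(1)  # 1 Hz

        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("sysinfo")
        syslog.out.link(xout.input)

        with dai.Device(pipeline) as device:
            info = device.getOutputQueue("sysinfo").get()  # SystemInformation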
    class

    depthai.node.ToF(depthai.Node)

    method
    setNumFramesPool(self, arg0: typing.SupportsInt) -> ToF: ToF
    Specify number of frames in output pool  Parameter ``numFramesPool``:     Number of frames in output pool
    method
    setNumShaves(self, arg0: typing.SupportsInt) -> ToF: ToF
    Specify number of shaves reserved for ToF decoding.
    property
    amplitude
    Outputs ImgFrame message that carries amplitude image.
    property
    depth
    Outputs ImgFrame message that carries decoded depth image.
    property
    initialConfig
    Initial config to use for depth calculation.
    property
    input
    Input raw ToF data. Default queue is blocking with size 8.
    property
    inputConfig
    Input ToF message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.
    property
    intensity
    Outputs ImgFrame message that carries intensity image.
    property
    phase
    Outputs ImgFrame message that carries phase image, useful for debugging. float32 type.
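    A minimal sketch, assuming a device with its ToF sensor on CAM_A; raw frames from a Camera node feed the decoder, and the stream name is illustrative:

        import depthai as dai

        pipeline = dai.Pipeline()

        camTof = pipeline.create(dai.node.Camera)
        camTof.setBoardSocket(dai.CameraBoardSocket.CAM_A)  # assumed ToF socket

        tof = pipeline.create(dai.node.ToF)
        camTof.raw.link(tof.input)  # raw ToF samples into the decoder

        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("tofDepth")
        tof.depth.link(xout.input)  # decoded depth ImgFrames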
    class

    depthai.node.UVC(depthai.Node)

    method
    setGpiosOnInit(self, list: collections.abc.Mapping [ typing.SupportsInt , typing.SupportsInt ])
    Set GPIO list <gpio_number, value> for GPIOs to set (on/off) at init
    method
    setGpiosOnStreamOff(self, list: collections.abc.Mapping [ typing.SupportsInt , typing.SupportsInt ])
    Set GPIO list <gpio_number, value> for GPIOs to set when streaming is disabled
    method
    setGpiosOnStreamOn(self, list: collections.abc.Mapping [ typing.SupportsInt , typing.SupportsInt ])
    Set GPIO list <gpio_number, value> for GPIOs to set when streaming is enabled
    property
    input
    Input for image frames to be streamed over UVC  Default queue is blocking with size 8
    class

    depthai.node.VideoEncoder(depthai.Node)

    method
    getLossless(self) -> bool: bool
    Get lossless mode. Applies only when using [M]JPEG profile.
    method
    getNumBFrames(self) -> int: int
    Get number of B frames
    method
    getNumFramesPool(self) -> int: int
    Get number of frames in pool  Returns:     Number of pool frames
    method
    setBitrate(self, bitrate: typing.SupportsInt)
    Set output bitrate in bps, for CBR rate control mode. 0 for auto (based on frame size and FPS). Applicable only to H264 and H265 profiles
    method
    setBitrateKbps(self, bitrateKbps: typing.SupportsInt)
    Set output bitrate in kbps, for CBR rate control mode. 0 for auto (based on frame size and FPS). Applicable only to H264 and H265 profiles
    method
    setFrameRate(self, frameRate: typing.SupportsFloat)
    Sets expected frame rate  Parameter ``frameRate``:     Frame rate in frames per second
    method
    setKeyframeFrequency(self, freq: typing.SupportsInt)
    Set keyframe frequency. Every Nth frame a keyframe is inserted.  Applicable only to H264 and H265 profiles  Examples:  - 30 FPS video, keyframe frequency: 30. Every 1s a keyframe will be inserted  - 60 FPS video, keyframe frequency: 180. Every 3s a keyframe will be inserted
    method
    setLossless(self, arg0: bool)
    Set lossless mode. Applies only to [M]JPEG profile  Parameter ``lossless``:     True to enable lossless jpeg encoding, false otherwise
    method
    setMaxOutputFrameSize(self, maxFrameSize: typing.SupportsInt)
    Specifies maximum output encoded frame size
    method
    setNumBFrames(self, numBFrames: typing.SupportsInt)
    Set number of B frames to be inserted. Applicable only to H264 and H265 profiles
    method
    setNumFramesPool(self, frames: typing.SupportsInt)
    Set number of frames in pool  Parameter ``frames``:     Number of pool frames
    method
    setQuality(self, quality: typing.SupportsInt)
    Set quality for [M]JPEG profile  Parameter ``quality``:     Value between 0-100%. Approximates quality
    method
    setRateControlMode(self, mode: depthai.VideoEncoderProperties.RateControlMode)
    Set rate control mode. Applicable only to H264 and H265 profiles
    property
    bitstream
    Outputs ImgFrame message that carries BITSTREAM encoded (MJPEG, H264 or H265) frame data. Mutually exclusive with out.
    property
    input
    Input for NV12 ImgFrame to be encoded Default queue is blocking with size set by 'setNumFramesPool' (4).
    property
    out
    Outputs EncodedFrame message that carries encoded (MJPEG, H264 or H265) frame data. Mutually exclusive with bitstream.
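    A minimal H265 sketch; setDefaultProfilePreset (one of the setters elided above) picks the expected FPS and profile in one call, and the stream name is illustrative:

        import depthai as dai

        pipeline = dai.Pipeline()

        camRgb = pipeline.create(dai.node.ColorCamera)

        venc = pipeline.create(dai.node.VideoEncoder)
        venc.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.H265_MAIN)
        venc.setBitrateKbps(4000)      # CBR target; H264/H265 only
        venc.setKeyframeFrequency(30)  # one keyframe per second at 30 FPS

        camRgb.video.link(venc.input)  # NV12 frames in

        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("h265")
        venc.bitstream.link(xout.input)  # encoded bitstream out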
    class

    depthai.node.Warp(depthai.Node)

    method
    getHwIds(self) -> list[int]: list[int]
    Retrieve which hardware warp engines to use
    method
    setHwIds(self, arg0: collections.abc.Sequence [ typing.SupportsInt ])
    Specify which hardware warp engines to use  Parameter ``ids``:     Which warp engines to use (0, 1, 2)
    method
    setInterpolation(self, arg0: depthai.Interpolation)
    Specify which interpolation method to use  Parameter ``interpolation``:     type of interpolation
    method
    setMaxOutputFrameSize(self, arg0: typing.SupportsInt)
    Specify maximum size of output image.  Parameter ``maxFrameSize``:     Maximum frame size in bytes
    method
    setNumFramesPool(self, arg0: typing.SupportsInt)
    Specify number of frames in pool.  Parameter ``numFramesPool``:     How many frames should the pool have
    property
    inputImage
    Input image to be modified. Default queue is blocking with size 8
    property
    out
    Outputs ImgFrame message that carries warped image.
    class

    depthai.node.XLinkIn(depthai.Node)

    method
    getMaxDataSize(self) -> int: int
    Get maximum message size in bytes
    method
    getNumFrames(self) -> int: int
    Get number of frames in pool
    method
    setMaxDataSize(self, maxDataSize: typing.SupportsInt)
    Set maximum message size it can receive  Parameter ``maxDataSize``:     Maximum size in bytes
    method
    setNumFrames(self, numFrames: typing.SupportsInt)
    Set number of frames in pool for sending messages forward  Parameter ``numFrames``:     Maximum number of frames in pool
    method
    setStreamName(self, streamName: str)
    Specifies XLink stream name to use.  The name should not start with double underscores '__', as those are reserved for internal use.  Parameter ``name``:     Stream name
    property
    out
    Outputs message of the same type as sent from the host.
    class

    depthai.node.XLinkOut(depthai.Node)

    method
    getFpsLimit(self) -> float: float
    Get rate limit in messages per second
    method
    getMetadataOnly(self) -> bool: bool
    Get whether to transfer only messages attributes and not buffer data
    method
    setFpsLimit(self, fpsLimit: typing.SupportsFloat)
    Specifies a message sending limit. It's approximated from specified rate.  Parameter ``fps``:     Approximate rate limit in messages per second
    method
    setMetadataOnly(self, arg0: bool)
    Specify whether to transfer only messages attributes and not buffer data
    method
    setStreamName(self, streamName: str)
    Specifies XLink stream name to use.  The name should not start with double underscores '__', as those are reserved for internal use.  Parameter ``name``:     Stream name
    property
    input
    Input for any type of messages to be transferred over XLink stream  Default queue is blocking with size 8
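    A minimal round trip over XLink: an XLinkIn stream carries CameraControl messages to the device, an XLinkOut stream carries frames back (stream names are illustrative):

        import depthai as dai

        pipeline = dai.Pipeline()

        camRgb = pipeline.create(dai.node.ColorCamera)

        xin = pipeline.create(dai.node.XLinkIn)
        xin.setStreamName("control")  # host -> device
        xin.out.link(camRgb.inputControl)

        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("video")  # device -> host
        camRgb.video.link(xout.input)

        with dai.Device(pipeline) as device:
            qCtrl = device.getInputQueue("control")
            qVideo = device.getOutputQueue("video", maxSize=4, blocking=False)

            ctrl = dai.CameraControl()
            ctrl.setManualFocus(130)  # lens position 0..255
            qCtrl.send(ctrl)
            frame = qVideo.get()  # block until an ImgFrame arrives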
    class

    depthai.ADatatype

    class

    depthai.AprilTag

    property
    bottomLeft
    The detected bottom left coordinates.
    property
    bottomRight
    The detected bottom right coordinates.
    property
    decisionMargin
    A measure of the quality of the binary decoding process; the average difference between the intensity of a data bit versus the decision threshold. Higher numbers roughly indicate better decodes. This is a reasonable measure of detection accuracy only for very small tags; it is not effective for larger tags (where we could have sampled anywhere within a bit cell and still gotten a good detection).
    property
    hamming
    How many error bits were corrected? Note: accepting large numbers of corrected errors leads to greatly increased false positive rates. As of this implementation, the detector cannot detect tags with a hamming distance greater than 2.
    property
    id
    The decoded ID of the tag
    property
    topLeft
    The detected top left coordinates.
    property
    topRight
    The detected top right coordinates.
    class

    depthai.AprilTagConfig(depthai.Buffer)

    class
    Family
    Supported AprilTag families.  Members:    TAG_36H11    TAG_36H10    TAG_25H9    TAG_16H5    TAG_CIR21H7    TAG_STAND41H12
    class
    QuadThresholds
    AprilTag quad threshold parameters.
    method
    get(self) -> RawAprilTagConfig: RawAprilTagConfig
    Retrieve configuration data for AprilTag.  Returns:     config for the AprilTag detection algorithm
    method
    set(self, arg0: RawAprilTagConfig) -> AprilTagConfig: AprilTagConfig
    Set explicit configuration.  Parameter ``config``:     Explicit configuration
    class

    depthai.AprilTagConfig.QuadThresholds

    property
    criticalDegree
    Reject quads where pairs of edges have angles that are close to straight or close to 180 degrees. Zero means that no quads are rejected. (In degrees).
    property
    deglitch
    Should the thresholded image be deglitched? Only useful for very noisy images
    property
    maxLineFitMse
    When fitting lines to the contours, what is the maximum mean squared error allowed? This is useful in rejecting contours that are far from being quad shaped; rejecting these quads "early" saves expensive decoding processing.
    property
    maxNmaxima
    How many corner candidates to consider when segmenting a group of pixels into a quad.
    property
    minClusterPixels
    Reject quads containing too few pixels.
    property
    minWhiteBlackDiff
    When we build our model of black & white pixels, we add an extra check that the white model must be (overall) brighter than the black model. How much brighter? (in pixel values: [0,255]).
    class

    depthai.AprilTagProperties

    property
    inputConfigSync
    Whether to wait for config at 'inputConfig' IO
    class

    depthai.AprilTags(depthai.Buffer)

    method
    getSequenceNum(self) -> int: int
    Retrieves sequence number
    method
    getTimestamp(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp related to dai::Clock::now()
    method
    getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
    method
    setTimestamp(self, arg0: datetime.timedelta) -> AprilTags: AprilTags
    Sets image timestamp related to dai::Clock::now()
    class

    depthai.AssetManager

    method
    addExisting(self, assets: collections.abc.Sequence [ Asset ])
    Adds all assets in an array to the AssetManager  Parameter ``assets``:     Vector of assets to add
    method
    remove(self, key: str)
    Removes asset with key  Parameter ``key``:     Key of asset to remove
    method
    size(self) -> int: int
    Returns:     Number of assets stored in the AssetManager
    class

    depthai.BoardConfig

    class
    GPIO
    GPIO config
    class
    Network
    Network configuration
    class
    UART
    UART instance config
    class
    USB
    USB related config
    class
    UVC
    UVC configuration for USB descriptor
    property
    emmc
    eMMC config
    property
    logDevicePrints
    log device prints
    property
    logPath
    log path
    property
    logSizeMax
    Max log size
    property
    logVerbosity
    log verbosity
    property
    mipi4LaneRgb
    MIPI 4Lane RGB config
    property
    sysctl
    Optional list of FreeBSD sysctl parameters to be set (system, network, etc.). For example: "net.inet.tcp.delayed_ack=0" (this one is also set by default)
    property
    uart
    UART instance map
    property
    usb3PhyInternalClock
    USB3 phy config
    property
    watchdogTimeoutMs
    Watchdog config
    class

    depthai.BoardConfig.GPIO

    class
    Direction
    Members:    INPUT :     OUTPUT : 
    class
    Drive
    Drive strength in mA (2, 4, 8 and 12mA)  Members:    MA_2 :     MA_4 :     MA_8 :     MA_12 : 
    class
    Level
    Members:    LOW :     HIGH : 
    class
    Mode
    Members:    ALT_MODE_0 :     ALT_MODE_1 :     ALT_MODE_2 :     ALT_MODE_3 :     ALT_MODE_4 :     ALT_MODE_5 :     ALT_MODE_6 :     DIRECT : 
    class
    Pull
    Members:    NO_PULL :     PULL_UP :     PULL_DOWN :     BUS_KEEPER : 
    class

    depthai.BoardConfig.Network

    class

    depthai.BoardConfig.UART

    class

    depthai.Buffer(depthai.ADatatype)

    class

    depthai.CalibrationHandler

    static method
    CalibrationHandler.fromJson(arg0: json) -> CalibrationHandler: CalibrationHandler
    Construct a new Calibration Handler object from JSON EepromData.  Parameter ``eepromDataJson``:     EepromData as JSON
    method
    eepromToJson(self) -> json: json
    Get JSON representation of calibration data  Returns:     JSON structure
    method
    eepromToJsonFile(self, destPath: Path) -> bool: bool
    Write raw calibration/board data to json file.  Parameter ``destPath``:     Full path to the json file in which raw calibration data will be stored  Returns:     True on success, false otherwise
    method
    getBaselineDistance(self, cam1: CameraBoardSocket = ..., cam2: CameraBoardSocket = ..., useSpecTranslation: bool = True) -> float: float
    Get the baseline distance between two specified cameras. By default it will get the baseline between CameraBoardSocket.CAM_C and CameraBoardSocket.CAM_B.  Parameter ``cam1``:     First camera  Parameter ``cam2``:     Second camera  Parameter ``useSpecTranslation``:     Enabling this bool uses the translation information from the board design     data (not the calibration data)  Returns:     baseline distance in centimeters
    method
    getCameraExtrinsics(self, srcCamera: CameraBoardSocket, dstCamera: CameraBoardSocket, useSpecTranslation: bool = False) -> list[list[float]]: list[list[float]]
    Get the Camera Extrinsics object between two cameras from the calibration data. If there is a linked connection between the two cameras, the relative rotation and translation (in centimeters) is returned by this function.  Parameter ``srcCamera``:     Camera Id of the camera which will be considered as origin.  Parameter ``dstCamera``:     Camera Id of the destination camera to which we are fetching the rotation     and translation from the SrcCamera  Parameter ``useSpecTranslation``:     Enabling this bool uses the translation information from the board design     data  Returns:     a transformation matrix which is 4x4 in homogeneous coordinates: \[ \text{Transformation Matrix} = \begin{bmatrix} r_{00} & r_{01} & r_{02} & T_x \\ r_{10} & r_{11} & r_{12} & T_y \\ r_{20} & r_{21} & r_{22} & T_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
    method
    getCameraToImuExtrinsics(self, cameraId: CameraBoardSocket, useSpecTranslation: bool = False) -> list[list[float]]: list[list[float]]
    Get the Camera To Imu Extrinsics object. From the data loaded, if there is a linked connection between the IMU and the given camera, their relative rotation and translation from the camera to the IMU is returned.  Parameter ``cameraId``:     Camera Id of the camera which will be considered as origin, from which the     transformation matrix to the IMU will be found  Parameter ``useSpecTranslation``:     Enabling this bool uses the translation information from the board design     data  Returns:     a transformation matrix which is 4x4 in homogeneous coordinates: \[ \text{Transformation Matrix} = \begin{bmatrix} r_{00} & r_{01} & r_{02} & T_x \\ r_{10} & r_{11} & r_{12} & T_y \\ r_{20} & r_{21} & r_{22} & T_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
    method
    getCameraTranslationVector(self, srcCamera: CameraBoardSocket, dstCamera: CameraBoardSocket, useSpecTranslation: bool = True) -> list[float]: list[float]
    Get the Camera translation vector between two cameras from the calibration data.  Parameter ``srcCamera``:     Camera Id of the camera which will be considered as origin.  Parameter ``dstCamera``:     Camera Id of the destination camera to which we are fetching the translation     vector from the SrcCamera  Parameter ``useSpecTranslation``:     Disabling this bool uses the translation information from the calibration     data (not the board design data)  Returns:     a translation vector like [x, y, z] in centimeters
    method
    getDefaultIntrinsics(self, cameraId: CameraBoardSocket) -> tuple[list[list[float]], int, int]: tuple[list[list[float]], int, int]
    Get the Default Intrinsics object  Parameter ``cameraId``:     Uses the cameraId to identify which camera intrinsics to return  Returns:     Represents the 3x3 intrinsics matrix of the respective camera along with     width and height at which it was calibrated: \[ \text{Intrinsic Matrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \]
    method
    getDistortionCoefficients(self, cameraId: CameraBoardSocket) -> list[float]: list[float]
    Get the Distortion Coefficients object  Parameter ``cameraId``:     Uses the cameraId to identify which distortion Coefficients to return.  Returns:     the distortion coefficients of the requested camera in this order:     [k1,k2,p1,p2,k3,k4,k5,k6,s1,s2,s3,s4,tx,ty] for CameraModel::Perspective or     [k1, k2, k3, k4] for CameraModel::Fisheye see     https://docs.opencv.org/4.5.4/d9/d0c/group__calib3d.html for Perspective     model (Rational Polynomial Model) see     https://docs.opencv.org/4.5.4/db/d58/group__calib3d__fisheye.html for     Fisheye model
    method
    getDistortionModel(self, cameraId: CameraBoardSocket) -> CameraModel: CameraModel
    Get the distortion model of the given camera  Parameter ``cameraId``:     of the camera whose distortion model is requested.  Returns:     distortion model of the camera with given cameraId.
    method
    getEepromData(self) -> EepromData: EepromData
    Get the Eeprom Data object  Returns:     EepromData object which contains the raw calibration data
    method
    getFov(self, cameraId: CameraBoardSocket, useSpec: bool = True) -> float: float
    Get the Fov of the camera  Parameter ``cameraId``:     of the camera of which we are fetching fov.  Parameter ``useSpec``:     Disabling this bool will calculate the fov based on intrinsics (focal     length, image width), instead of getting it from the camera specs  Returns:     field of view of the camera with given cameraId.
    method
    getImuToCameraExtrinsics(self, cameraId: CameraBoardSocket, useSpecTranslation: bool = False) -> list[list[float]]: list[list[float]]
    Get the Imu To Camera Extrinsics object. From the data loaded, if there is a linked connection between the IMU and the given camera, their relative rotation and translation from the IMU to the camera is returned.  Parameter ``cameraId``:     Camera Id of the camera which will be considered as destination, to which     the transformation matrix from the IMU will be found.  Parameter ``useSpecTranslation``:     Enabling this bool uses the translation information from the board design     data  Returns:     a transformation matrix which is 4x4 in homogeneous coordinates: \[ \text{Transformation Matrix} = \begin{bmatrix} r_{00} & r_{01} & r_{02} & T_x \\ r_{10} & r_{11} & r_{12} & T_y \\ r_{20} & r_{21} & r_{22} & T_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
    method
    getLensPosition(self, cameraId: CameraBoardSocket) -> int: int
    Get the lens position of the given camera  Parameter ``cameraId``:     of the camera whose lens position is requested.  Returns:     lens position of the camera with given cameraId at which it was calibrated.
    method
    getStereoLeftCameraId(self) -> CameraBoardSocket: CameraBoardSocket
    Get the camera id of the camera which is used as left camera of the stereo setup  Returns:     cameraID of the camera used as left camera
    method
    getStereoLeftRectificationRotation(self) -> list[list[float]]: list[list[float]]
    Get the Stereo Left Rectification Rotation object  Returns:     returns a 3x3 rectification rotation matrix
    method
    getStereoRightCameraId(self) -> CameraBoardSocket: CameraBoardSocket
    Get the camera id of the camera which is used as right camera of the stereo setup  Returns:     cameraID of the camera used as right camera
    method
    getStereoRightRectificationRotation(self) -> list[list[float]]: list[list[float]]
    Get the Stereo Right Rectification Rotation object  Returns:     returns a 3x3 rectification rotation matrix
    method
    setCameraExtrinsics(self, srcCameraId: CameraBoardSocket, destCameraId: CameraBoardSocket, rotationMatrix: collections.abc.Sequence [ collections.abc.Sequence [ typing.SupportsFloat ] ], translation: collections.abc.Sequence [ typing.SupportsFloat ], specTranslation: collections.abc.Sequence [ typing.SupportsFloat ] = [0.0, 0.0, 0.0])
    Set the Camera Extrinsics object  Parameter ``srcCameraId``:     Camera Id of the camera which will be considered as relative origin.  Parameter ``destCameraId``:     Camera Id of the camera which will be considered as destination from     srcCameraId.  Parameter ``rotationMatrix``:     Rotation between srcCameraId and destCameraId origins.  Parameter ``translation``:     Translation between srcCameraId and destCameraId origins.  Parameter ``specTranslation``:     Translation between srcCameraId and destCameraId origins from the design.
    method
    setCameraType(self, cameraId: CameraBoardSocket, cameraModel: CameraModel)
    Set the Camera Type object  Parameter ``cameraId``:     CameraId of the camera for which cameraModel Type is being updated.  Parameter ``cameraModel``:     Type of the model the camera represents
    method
    setDeviceName(self, deviceName: str)
    Set the deviceName which is returned by getDeviceName of Device  Parameter ``deviceName``:     Sets device name.
    method
    setDistortionCoefficients(self, cameraId: CameraBoardSocket, distortionCoefficients: collections.abc.Sequence [ typing.SupportsFloat ])
    Sets the distortion Coefficients obtained from camera calibration  Parameter ``cameraId``:     Camera Id of the camera for which distortion coefficients are computed  Parameter ``distortionCoefficients``:     Distortion Coefficients of the respective Camera.
    method
    setFov(self, cameraId: CameraBoardSocket, hfov: typing.SupportsFloat)
    Set the Fov of the Camera  Parameter ``cameraId``:     Camera Id of the camera  Parameter ``hfov``:     Horizontal fov of the camera from Camera Datasheet
    method
    setImuExtrinsics(self, destCameraId: CameraBoardSocket, rotationMatrix: collections.abc.Sequence [ collections.abc.Sequence [ typing.SupportsFloat ] ], translation: collections.abc.Sequence [ typing.SupportsFloat ], specTranslation: collections.abc.Sequence [ typing.SupportsFloat ] = [0.0, 0.0, 0.0])
    Set the Imu to Camera Extrinsics object  Parameter ``destCameraId``:     Camera Id of the camera which will be considered as destination from IMU.  Parameter ``rotationMatrix``:     Rotation between IMU and destCameraId origins.  Parameter ``translation``:     Translation between IMU and destCameraId origins.  Parameter ``specTranslation``:     Translation between IMU and destCameraId origins from the design.
    method
    setLensPosition(self, cameraId: CameraBoardSocket, lensPosition: typing.SupportsInt)
    Sets the lens position at which the given camera was calibrated  Parameter ``cameraId``:     Camera Id of the camera  Parameter ``lensPosition``:     lens position value of the camera at the time of calibration
    method
    setProductName(self, productName: str)
    Set the productName which acts as an alias for users to identify the device  Parameter ``productName``:     Sets product name (alias).
    method
    setStereoLeft(self, cameraId: CameraBoardSocket, rectifiedRotation: collections.abc.Sequence [ collections.abc.Sequence [ typing.SupportsFloat ] ])
    Set the Stereo Left Rectification object  Parameter ``cameraId``:     CameraId of the camera which will be used as left Camera of stereo Setup  Parameter ``rectifiedRotation``:     Rectification rotation of the left camera required for feature matching  Homography of the Left Rectification = Intrinsics_right * rectifiedRotation * inv(Intrinsics_left)
    method
    setStereoRight(self, cameraId: CameraBoardSocket, rectifiedRotation: collections.abc.Sequence [ collections.abc.Sequence [ typing.SupportsFloat ] ])
    Set the Stereo Right Rectification object  Parameter ``cameraId``:     CameraId of the camera which will be used as right Camera of stereo Setup  Parameter ``rectifiedRotation``:     Rectification rotation of the right camera required for feature matching  Homography of the Right Rectification = Intrinsics_right * rectifiedRotation * inv(Intrinsics_right)
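    A minimal host-side sketch reading calibration off a connected device with the getters above:

        import depthai as dai

        with dai.Device() as device:
            calib = device.readCalibration()

            # 3x3 intrinsics plus the resolution they were calibrated at
            M, width, height = calib.getDefaultIntrinsics(dai.CameraBoardSocket.CAM_A)

            dist = calib.getDistortionCoefficients(dai.CameraBoardSocket.CAM_A)
            baselineCm = calib.getBaselineDistance()  # CAM_C to CAM_B, centimeters

            # 4x4 homogeneous transform, translation in centimeters
            T = calib.getCameraExtrinsics(dai.CameraBoardSocket.CAM_B,
                                          dai.CameraBoardSocket.CAM_C)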
    class

    depthai.CameraControl(depthai.Buffer)

    class
    AntiBandingMode
    Members:    OFF    MAINS_50_HZ    MAINS_60_HZ    AUTO
    class
    AutoFocusMode
    Members:    OFF    AUTO    MACRO    CONTINUOUS_VIDEO    CONTINUOUS_PICTURE    EDOF
    class
    AutoWhiteBalanceMode
    Members:    OFF    AUTO    INCANDESCENT    FLUORESCENT    WARM_FLUORESCENT    DAYLIGHT    CLOUDY_DAYLIGHT    TWILIGHT    SHADE
    class
    CaptureIntent
    Members:    CUSTOM    PREVIEW    STILL_CAPTURE    VIDEO_RECORD    VIDEO_SNAPSHOT    ZERO_SHUTTER_LAG
    class
    Command
    Members:    START_STREAM    STOP_STREAM    STILL_CAPTURE    MOVE_LENS    AF_TRIGGER    AE_MANUAL    AE_AUTO    AWB_MODE    SCENE_MODE    ANTIBANDING_MODE    EXPOSURE_COMPENSATION    AE_LOCK    AE_TARGET_FPS_RANGE    AWB_LOCK    CAPTURE_INTENT    CONTROL_MODE    FRAME_DURATION    SENSITIVITY    EFFECT_MODE    AF_MODE    NOISE_REDUCTION_STRENGTH    SATURATION    BRIGHTNESS    STREAM_FORMAT    RESOLUTION    SHARPNESS    CUSTOM_USECASE    CUSTOM_CAPT_MODE    CUSTOM_EXP_BRACKETS    CUSTOM_CAPTURE    CONTRAST    AE_REGION    AF_REGION    LUMA_DENOISE    CHROMA_DENOISE    WB_COLOR_TEMP
    class
    ControlMode
    Members:    OFF    AUTO    USE_SCENE_MODE
    class
    EffectMode
    Members:    OFF    MONO    NEGATIVE    SOLARIZE    SEPIA    POSTERIZE    WHITEBOARD    BLACKBOARD    AQUA
    class
    FrameSyncMode
    Members:    OFF    OUTPUT    INPUT
    class
    SceneMode
    Members:    UNSUPPORTED    FACE_PRIORITY    ACTION    PORTRAIT    LANDSCAPE    NIGHT    NIGHT_PORTRAIT    THEATRE    BEACH    SNOW    SUNSET    STEADYPHOTO    FIREWORKS    SPORTS    PARTY    CANDLELIGHT    BARCODE
    method
    __init__(self)
    Construct CameraControl message
    method
    clearMiscControls(self)
    Clear the list of miscellaneous controls set by `setControl`
    method
    get(self) -> RawCameraControl: RawCameraControl
    Retrieve configuration data for CameraControl.  Returns:     config for CameraControl
    method
    getCaptureStill(self) -> bool: bool
    Check whether command to capture a still is set  Returns:     True if capture still command is set
    method
    getLensPosition(self) -> int: int
    Retrieves lens position, range 0..255. Returns -1 if not available
    method
    getLensPositionRaw(self) -> float: float
    Retrieves lens position, range 0.0f..1.0f.
    method
    getMiscControls(self) -> list[tuple[str, str]]: list[tuple[str, str]]
    Get the list of miscellaneous controls set by `setControl`  Returns:     A list of <key, value> pairs as strings
    method
    getSensitivity(self) -> int: int
    Retrieves sensitivity, as an ISO value
    method
    set(self, config: RawCameraControl) -> CameraControl: CameraControl
    Set explicit configuration.  Parameter ``config``:     Explicit configuration
    method
    setAntiBandingMode(self, mode: RawCameraControl.AntiBandingMode) -> CameraControl: CameraControl
    Set a command to specify anti-banding mode. Anti-banding / anti-flicker works in auto-exposure mode, by controlling the exposure time to be applied in multiples of half the mains period, for example in multiples of 10ms for 50Hz (period 20ms) AC-powered illumination sources.  If the scene would be too bright for the smallest exposure step (10ms in the example, with ISO at a minimum of 100), anti-banding is not effective.  Parameter ``mode``:     Anti-banding mode to use. Default: `MAINS_50_HZ`
    method
    setAutoExposureCompensation(self, compensation: typing.SupportsInt) -> CameraControl: CameraControl
    Set a command to specify auto exposure compensation  Parameter ``compensation``:     Compensation value between -9..9, default 0
    method
    setAutoExposureLock(self, lock: bool) -> CameraControl: CameraControl
    Set a command to specify lock auto exposure  Parameter ``lock``:     Auto exposure lock mode enabled or disabled
    method
    setAutoExposureRegion(self, startX: typing.SupportsInt, startY: typing.SupportsInt, width: typing.SupportsInt, height: typing.SupportsInt) -> CameraControl: CameraControl
    Set a command to specify auto exposure region in pixels. Note: the region should be mapped to the configured sensor resolution, before ISP scaling  Parameter ``startX``:     X coordinate of top left corner of region  Parameter ``startY``:     Y coordinate of top left corner of region  Parameter ``width``:     Region width  Parameter ``height``:     Region height
    method
    setAutoFocusLensRange(self, infinityPosition: typing.SupportsInt, macroPosition: typing.SupportsInt) -> CameraControl: CameraControl
    Set autofocus lens range, `infinityPosition < macroPosition`, valid values `0..255`. May help to improve autofocus in case the lens adjustment is not typical/tuned
    method
    setAutoFocusMode(self, mode: RawCameraControl.AutoFocusMode) -> CameraControl: CameraControl
    Set a command to specify autofocus mode. Default `CONTINUOUS_VIDEO`
    method
    setAutoFocusRegion(self, startX: typing.SupportsInt, startY: typing.SupportsInt, width: typing.SupportsInt, height: typing.SupportsInt) -> CameraControl: CameraControl
    Set a command to specify focus region in pixels. Note: the region should be mapped to the configured sensor resolution, before ISP scaling  Parameter ``startX``:     X coordinate of top left corner of region  Parameter ``startY``:     Y coordinate of top left corner of region  Parameter ``width``:     Region width  Parameter ``height``:     Region height
    method
    setAutoWhiteBalanceLock(self, lock: bool) -> CameraControl: CameraControl
    Set a command to specify auto white balance lock  Parameter ``lock``:     Auto white balance lock mode enabled or disabled
    method
    setAutoWhiteBalanceMode(self, mode: RawCameraControl.AutoWhiteBalanceMode) -> CameraControl: CameraControl
    Set a command to specify auto white balance mode  Parameter ``mode``:     Auto white balance mode to use. Default `AUTO`
    method
    setBrightness(self, value: typing.SupportsInt) -> CameraControl: CameraControl
    Set a command to adjust image brightness  Parameter ``value``:     Brightness, range -10..10, default 0
    method
    setCaptureIntent(self, mode: RawCameraControl.CaptureIntent) -> CameraControl: CameraControl
    Set a command to specify capture intent mode  Parameter ``mode``:     Capture intent mode
    method
    setChromaDenoise(self, value: typing.SupportsInt) -> CameraControl: CameraControl
    Set a command to adjust chroma denoise amount  Parameter ``value``:     Chroma denoise amount, range 0..4, default 1
    method
    setContrast(self, value: typing.SupportsInt) -> CameraControl: CameraControl
    Set a command to adjust image contrast  Parameter ``value``:     Contrast, range -10..10, default 0
    method
    setControlMode(self, mode: RawCameraControl.ControlMode) -> CameraControl: CameraControl
    Set a command to specify control mode  Parameter ``mode``:     Control mode
    method
    setEffectMode(self, mode: RawCameraControl.EffectMode) -> CameraControl: CameraControl
    Set a command to specify effect mode  Parameter ``mode``:     Effect mode
    method
    setExternalTrigger(self, numFramesBurst: typing.SupportsInt, numFramesDiscard: typing.SupportsInt) -> CameraControl: CameraControl
    Set a command to enable external trigger snapshot mode  A rising edge on the sensor FSIN pin will make it capture a sequence of `numFramesBurst` frames. The first `numFramesDiscard` frames will be skipped as configured (can be set to 0 as well), as they may have degraded quality
    method
    setFrameSyncMode(self, mode: RawCameraControl.FrameSyncMode) -> CameraControl: CameraControl
    Set the frame sync mode for continuous streaming operation mode, translating to how the camera pin FSIN/FSYNC is used: input/output/disabled
    method
    setLumaDenoise(self, value: typing.SupportsInt) -> CameraControl: CameraControl
    Set a command to adjust luma denoise amount  Parameter ``value``:     Luma denoise amount, range 0..4, default 1
    method
    setManualFocus(self, lensPosition: typing.SupportsInt) -> CameraControl: CameraControl
    Set a command to specify manual focus position  Parameter ``lensPosition``:     specify lens position 0..255
    method
    setManualFocusRaw(self, lensPositionRaw: typing.SupportsFloat) -> CameraControl: CameraControl
    Set a command to specify manual focus position (more precise control).  Parameter ``lensPositionRaw``:     specify lens position 0.0f .. 1.0f  Returns:     CameraControl&
    method
    setManualWhiteBalance(self, colorTemperatureK: typing.SupportsInt) -> CameraControl: CameraControl
    Set a command to manually specify white-balance color correction  Parameter ``colorTemperatureK``:     Light source color temperature in kelvins, range 1000..12000
    method
    setSaturation(self, value: typing.SupportsInt) -> CameraControl: CameraControl
    Set a command to adjust image saturation  Parameter ``value``:     Saturation, range -10..10, default 0
    method
    setSceneMode(self, mode: RawCameraControl.SceneMode) -> CameraControl: CameraControl
    Set a command to specify scene mode  Parameter ``mode``:     Scene mode
    method
    setSharpness(self, value: typing.SupportsInt) -> CameraControl: CameraControl
    Set a command to adjust image sharpness  Parameter ``value``:     Sharpness, range 0..4, default 1
    method
    setStrobeExternal(self, gpioNumber: typing.SupportsInt, activeLevel: typing.SupportsInt) -> CameraControl: CameraControl
    Enable STROBE output driven by a MyriadX GPIO, optionally configuring the polarity. This normally requires a FSIN/FSYNC/trigger input for MyriadX (usually GPIO 41), to generate timings
    method
    setStrobeSensor(self, activeLevel: typing.SupportsInt) -> CameraControl: CameraControl
    Enable STROBE output on sensor pin, optionally configuring the polarity. Note: for many sensors the polarity is high-active and not configurable
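    A minimal sketch sending runtime camera controls from the host; the XLinkIn stream name is illustrative and setManualExposure is one of the setters elided above:

        import depthai as dai

        pipeline = dai.Pipeline()

        camRgb = pipeline.create(dai.node.ColorCamera)
        xin = pipeline.create(dai.node.XLinkIn)
        xin.setStreamName("control")
        xin.out.link(camRgb.inputControl)

        with dai.Device(pipeline) as device:
            qCtrl = device.getInputQueue("control")

            ctrl = dai.CameraControl()
            ctrl.setManualExposure(20000, 400)  # 20 ms exposure, ISO 400
            ctrl.setManualWhiteBalance(4000)    # 4000 K
            ctrl.setAntiBandingMode(dai.CameraControl.AntiBandingMode.MAINS_50_HZ)
            qCtrl.send(ctrl)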
    class

    depthai.CameraControl.Command

    class

    depthai.CameraControl.SceneMode

    class

    depthai.CameraProperties

    class
    ColorOrder
    For 24 bit color these can be either RGB or BGR  Members:    BGR    RGB
    class
    WarpMeshSource
    Warp mesh source  Members:    AUTO    NONE    CALIBRATION    URI
    class

    depthai.ColorCameraProperties

    class
    ColorOrder
    For 24 bit color these can be either RGB or BGR  Members:    BGR    RGB
    class
    SensorResolution
    Select the camera sensor resolution  Members:    THE_1080_P    THE_1200_P    THE_4_K    THE_5_MP    THE_12_MP    THE_4000X3000    THE_13_MP    THE_5312X6000    THE_48_MP    THE_720_P    THE_800_P    THE_1440X1080    THE_1352X1012    THE_2024X1520
    class

    depthai.ColorCameraProperties.SensorResolution

    class

    depthai.DataInputQueue

    method
    close(self)
    Closes the queue and the underlying thread
    method
    getBlocking(self) -> bool: bool
    Gets current queue behavior when full (maxSize)  Returns:     True if blocking, false otherwise
    method
    getMaxSize(self) -> int: int
    Gets queue maximum size  Returns:     Maximum queue size
    method
    getName(self) -> str: str
    Gets the queue's name  Returns:     Queue name
    method
    isClosed(self) -> bool: bool
    Check whether queue is closed  .. warning::     This function is thread-unsafe and may return outdated incorrect values. It is only meant for use in simple single-threaded code. Well-written code should handle exceptions when calling any DepthAI APIs to handle hardware events and multithreaded use.
    method
    setBlocking(self, blocking: bool)
    Sets queue behavior when full (maxSize)  Parameter ``blocking``:     Specifies whether to block or to overwrite the oldest message in the queue
    method
    setMaxSize(self, maxSize: typing.SupportsInt)
    Sets queue maximum size  Parameter ``maxSize``:     Specifies maximum number of messages in the queue
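    Example: a minimal sketch of tuning and using an input queue; it assumes a running `device` whose pipeline has an XLinkIn node with the (hypothetical) stream name `inFrame`:

    ```python
    inQ = device.getInputQueue("inFrame", maxSize=4, blocking=False)
    inQ.setBlocking(True)   # block the sender instead of overwriting when full
    inQ.setMaxSize(8)       # raise the limit at runtime
    img = dai.ImgFrame()    # an empty message, just to demonstrate send()
    inQ.send(img)
    ```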
    class

    depthai.DataOutputQueue

    method
    close(self)
    Closes the queue and the underlying thread
    method
    get(self) -> ADatatype: ADatatype
    Block until a message is available.  Returns:     Message or None if no message available
    method
    getAll(self) -> list[ADatatype]: list[ADatatype]
    Block until at least one message is in the queue. Then return all messages from the queue.  Returns:     Vector of messages
    method
    getBlocking(self) -> bool: bool
    Gets current queue behavior when full (maxSize)  Returns:     True if blocking, false otherwise
    method
    getMaxSize(self) -> int: int
    Gets queue maximum size  Returns:     Maximum queue size
    method
    getName(self) -> str: str
    Gets the queue's name  Returns:     Queue name
    method
    has(self) -> bool: bool
    Check whether front of the queue has a message (isn't empty)  Returns:     True if queue isn't empty, false otherwise
    method
    isClosed(self) -> bool: bool
    Check whether queue is closed  .. warning::     This function is thread-unsafe and may return outdated incorrect values. It is only meant for use in simple single-threaded code. Well-written code should handle exceptions when calling any DepthAI APIs to handle hardware events and multithreaded use.
    method
    removeCallback(self, callbackId: typing.SupportsInt) -> bool: bool
    Removes a callback  Parameter ``callbackId``:     Id of callback to be removed  Returns:     True if callback was removed, false otherwise
    method
    setBlocking(self, blocking: bool)
    Sets queue behavior when full (maxSize)  Parameter ``blocking``:     Specifies whether to block or to overwrite the oldest message in the queue
    method
    setMaxSize(self, maxSize: typing.SupportsInt)
    Sets queue maximum size  Parameter ``maxSize``:     Specifies maximum number of messages in the queue
    method
    tryGet(self) -> ADatatype: ADatatype
    Try to retrieve message from queue. If no message available, return immediately with None  Returns:     Message or None if no message available
    method
    tryGetAll(self) -> list[ADatatype]: list[ADatatype]
    Try to retrieve all messages in the queue.  Returns:     Vector of messages
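    Example: a minimal polling loop; the stream name `preview` is an assumed XLinkOut name, and `device` a running depthai.Device:

    ```python
    q = device.getOutputQueue(name="preview", maxSize=4, blocking=False)
    while True:
        if q.has():            # non-blocking check for a waiting message
            msg = q.get()      # returns immediately here, a message is ready
            print(type(msg).__name__)
        latest = q.tryGet()    # None when nothing is queued
    ```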
    class

    depthai.DatatypeEnum

    class

    depthai.DetectionNetworkProperties(depthai.NeuralNetworkProperties)

    class

    depthai.DetectionParserProperties

    class

    depthai.Device(depthai.DeviceBase)

    class
    Config
    Device-specific configuration
    method
    getInputQueueNames(self) -> list[str]: list[str]
    Get all available input queue names  Returns:     Vector of input queue names
    method
    getOutputQueueNames(self) -> list[str]: list[str]
    Get all available output queue names  Returns:     Vector of output queue names
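    Example: a minimal sketch listing the queues exposed by a pipeline's XLink streams (`pipeline` is assumed to be a previously built depthai.Pipeline):

    ```python
    with dai.Device(pipeline) as device:
        print("inputs :", device.getInputQueueNames())
        print("outputs:", device.getOutputQueueNames())
    ```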
    class

    depthai.DeviceBase

    static method
    DeviceBase.getAllAvailableDevices() -> list[DeviceInfo]: list[DeviceInfo]
    Returns all available devices  Returns:     Vector of available devices
    static method
    DeviceBase.getAllConnectedDevices() -> list[DeviceInfo]: list[DeviceInfo]
    Returns information of all connected devices. The devices could be both connectable as well as already connected to devices.  Returns:     Vector of connected device information
    static method
    DeviceBase.getDeviceByMxId(mxId: str) -> tuple[bool, DeviceInfo]: tuple[bool, DeviceInfo]
    Finds a device by MX ID. Example: 14442C10D13EABCE00  Parameter ``mxId``:     MyriadX ID which uniquely specifies a device  Returns:     Tuple of bool and DeviceInfo. Bool specifies if device was found. DeviceInfo specifies the found device
    static method
    DeviceBase.getFirstAvailableDevice(skipInvalidDevices: bool = True) -> tuple[bool, DeviceInfo]: tuple[bool, DeviceInfo]
    Gets first available device. Device can be either in XLINK_UNBOOTED or XLINK_BOOTLOADER state  Returns:     Tuple of bool and DeviceInfo. Bool specifies if device was found. DeviceInfo     specifies the found device
    static method
    DeviceBase.getGlobalProfilingData() -> ProfilingData: ProfilingData
    Get current global accumulated profiling data  Returns:     ProfilingData from all devices
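    Example: a minimal device-discovery sketch using the static methods above:

    ```python
    import depthai as dai

    for info in dai.Device.getAllAvailableDevices():
        print(info.getMxId(), info.state)

    found, info = dai.Device.getFirstAvailableDevice()
    if found:
        print("first available:", info.getMxId())
    ```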
    method
    addLogCallback(self, callback: collections.abc.Callable [ [ LogMessage ] , None ]) -> int: int
    Add a callback for device logging. The callback will be called from a separate thread with the LogMessage being passed.  Parameter ``callback``:     Callback to call whenever a log message arrives  Returns:     Id which can be used to later remove the callback
    method
    close(self)
    Closes the connection to device. A better alternative is to use the context manager: `with depthai.Device(pipeline) as device:`
    method
    factoryResetCalibration(self)
    Factory reset EEPROM data if factory backup is available.  Throws:     std::runtime_exception If factory reset was unsuccessful
    method
    flashCalibration(self, calibrationDataHandler: CalibrationHandler) -> bool: bool
    Stores the Calibration and Device information to the Device EEPROM  Parameter ``calibrationObj``:     CalibrationHandler object which is loaded with calibration information.  Returns:     true on successful flash, false on failure
    method
    flashCalibration2(self, arg0: CalibrationHandler)
    Stores the Calibration and Device information to the Device EEPROM  Throws:     std::runtime_exception if failed to flash the calibration  Parameter ``calibrationObj``:     CalibrationHandler object which is loaded with calibration information.
    method
    flashEepromClear(self)
    Destructive action, deletes User area EEPROM contents. Requires PROTECTED permissions  Throws:     std::runtime_exception if failed to flash the calibration  Returns:     True on successful flash, false on failure
    method
    flashFactoryCalibration(self, arg0: CalibrationHandler)
    Stores the Calibration and Device information to the Device EEPROM in Factory area. To perform this action, the correct env variable must be set  Throws:     std::runtime_exception if failed to flash the calibration  Returns:     True on successful flash, false on failure
    method
    flashFactoryEepromClear(self)
    Destructive action, deletes Factory area EEPROM contents. Requires FACTORY PROTECTED permissions  Throws:     std::runtime_exception if failed to flash the calibration  Returns:     True on successful flash, false on failure
    method
    getAvailableStereoPairs(self) -> list[StereoPair]: list[StereoPair]
    Get stereo pairs taking into account the calibration and connected cameras.  @note This method will always return a subset of `getStereoPairs`.  Returns:     Vector of stereo pairs
    method
    getBootloaderVersion(self) -> depthai.Version|None: depthai.Version|None
    Gets Bootloader version if it was booted through Bootloader  Returns:     DeviceBootloader::Version if booted through Bootloader or none otherwise
    method
    getCalibration(self) -> CalibrationHandler: CalibrationHandler
    Retrieves the CalibrationHandler object containing the non-persistent calibration  Throws:     std::runtime_exception if failed to get the calibration  Returns:     The CalibrationHandler object containing the non-persistent calibration
    method
    getCameraSensorNames(self) -> dict[CameraBoardSocket, str]: dict[CameraBoardSocket, str]
    Get sensor names for cameras that are connected to the device  Returns:     Map/dictionary with camera sensor names, indexed by socket
    method
    getChipTemperature(self) -> ChipTemperature: ChipTemperature
    Retrieves current chip temperature as measured by device  Returns:     Temperature of various onboard sensors
    method
    getCmxMemoryUsage(self) -> MemoryInfo: MemoryInfo
    Retrieves current CMX memory information from device  Returns:     Used, remaining and total cmx memory
    method
    getConnectedCameraFeatures(self) -> list[CameraFeatures]: list[CameraFeatures]
    Get cameras that are connected to the device with their features/properties  Returns:     Vector of connected camera features
    method
    getConnectedCameras(self) -> list[CameraBoardSocket]: list[CameraBoardSocket]
    Get cameras that are connected to the device  Returns:     Vector of connected cameras
    method
    getConnectedIMU(self) -> str: str
    Get connected IMU type  Returns:     IMU type
    method
    getConnectionInterfaces(self) -> list[connectionInterface]: list[connectionInterface]
    Get connection interfaces for device  Returns:     Vector of connection type
    method
    getDdrMemoryUsage(self) -> MemoryInfo: MemoryInfo
    Retrieves current DDR memory information from device  Returns:     Used, remaining and total ddr memory
    method
    getDeviceInfo(self) -> DeviceInfo: DeviceInfo
    Get the Device Info object of the device which is currently running  Returns:     DeviceInfo of the current device in execution
    method
    getDeviceName(self) -> typing.Any: typing.Any
    Get device name if available  Returns:     device name or empty string if not available
    method
    getEmbeddedIMUFirmwareVersion(self) -> Version: Version
    Get embedded IMU firmware version to which IMU can be upgraded  Returns:     Embedded IMU firmware version to which IMU can be upgraded
    method
    getIMUFirmwareUpdateStatus(self) -> tuple[bool, int]: tuple[bool, int]
    Get IMU firmware update status  Returns:     Whether IMU firmware update is done and last firmware update progress as percentage. A return value of true and 100 means the update was successful; true with any other value means the update failed
    method
    getIMUFirmwareVersion(self) -> Version: Version
    Get connected IMU firmware version  Returns:     IMU firmware version
    method
    getIrDrivers(self) -> list[tuple[str, int, int]]: list[tuple[str, int, int]]
    Retrieves detected IR laser/LED drivers.  Returns:     Vector of tuples containing: driver name, I2C bus, I2C address. For OAK-D-Pro it should be `[{"LM3644", 2, 0x63}]`
    method
    getLeonCssCpuUsage(self) -> CpuUsage: CpuUsage
    Retrieves average CSS Leon CPU usage  Returns:     Average CPU usage and sampling duration
    method
    getLeonCssHeapUsage(self) -> MemoryInfo: MemoryInfo
    Retrieves current CSS Leon CPU heap information from device  Returns:     Used, remaining and total heap memory
    method
    getLeonMssCpuUsage(self) -> CpuUsage: CpuUsage
    Retrieves average MSS Leon CPU usage  Returns:     Average CPU usage and sampling duration
    method
    getLeonMssHeapUsage(self) -> MemoryInfo: MemoryInfo
    Retrieves current MSS Leon CPU heap information from device  Returns:     Used, remaining and total heap memory
    method
    getLogLevel(self) -> LogLevel: LogLevel
    Gets current logging severity level of the device.  Returns:     Logging severity level
    method
    getLogOutputLevel(self) -> LogLevel: LogLevel
    Gets logging level which decides printing level to standard output.  Returns:     Standard output printing severity
    method
    getMxId(self) -> str: str
    Get MxId of device  Returns:     MxId of connected device
    method
    getProductName(self) -> typing.Any: typing.Any
    Get product name if available  Returns:     product name or empty string if not available
    method
    getProfilingData(self) -> ProfilingData: ProfilingData
    Get current accumulated profiling data  Returns:     ProfilingData from the specific device
    method
    getState(self) -> CrashDump: CrashDump
    Retrieves current device state in a crash dump format. It halts the device temporarily and might affect the running pipeline, it's best to close the device after this operation
    method
    getStereoPairs(self) -> list[StereoPair]: list[StereoPair]
    Get stereo pairs based on the device type.  Returns:     Vector of stereo pairs
    method
    getSystemInformationLoggingRate(self) -> float: float
    Gets current rate of system information logging ("info" severity) in Hz.  Returns:     Logging rate in Hz
    method
    getUsbSpeed(self) -> UsbSpeed: UsbSpeed
    Retrieves USB connection speed  Returns:     USB connection speed of connected device if applicable. Unknown otherwise.
    method
    getXLinkChunkSize(self) -> int: int
    Gets current XLink chunk size.  Returns:     XLink chunk size in bytes
    method
    hasCrashDump(self) -> bool: bool
    Retrieves whether a crash dump is stored on the device or not.
    method
    isClosed(self) -> bool: bool
    Is the device already closed (or disconnected)  .. warning::     This function is thread-unsafe and may return outdated incorrect values. It is only meant for use in simple single-threaded code. Well-written code should handle exceptions when calling any DepthAI APIs to handle hardware events and multithreaded use.
    method
    isEepromAvailable(self) -> bool: bool
    Check if EEPROM is available  Returns:     True if EEPROM is present on board, false otherwise
    method
    isPipelineRunning(self) -> bool: bool
    Checks if devices pipeline is already running  Returns:     True if running, false otherwise
    method
    readCalibration(self) -> CalibrationHandler: CalibrationHandler
    Fetches the EEPROM data from the device and loads it into CalibrationHandler object If no calibration is flashed, it returns default  Returns:     The CalibrationHandler object containing the calibration currently flashed     on device EEPROM
    method
    readCalibration2(self) -> CalibrationHandler: CalibrationHandler
    Fetches the EEPROM data from the device and loads it into CalibrationHandler object  Throws:     std::runtime_exception if no calibration is flashed  Returns:     The CalibrationHandler object containing the calibration currently flashed     on device EEPROM
    method
    readCalibrationOrDefault(self) -> CalibrationHandler: CalibrationHandler
    Fetches the EEPROM data from the device and loads it into CalibrationHandler object If no calibration is flashed, it returns default  Returns:     The CalibrationHandler object containing the calibration currently flashed     on device EEPROM
    method
    readCalibrationRaw(self) -> list[int]: list[int]
    Fetches the raw EEPROM data from User area  Throws:     std::runtime_exception if any error occurred  Returns:     Binary dump of User area EEPROM data
    method
    readFactoryCalibration(self) -> CalibrationHandler: CalibrationHandler
    Fetches the EEPROM data from Factory area and loads it into CalibrationHandler object  Throws:     std::runtime_exception if no calibration is flashed  Returns:     The CalibrationHandler object containing the calibration currently flashed     on device EEPROM in Factory Area
    method
    readFactoryCalibrationOrDefault(self) -> CalibrationHandler: CalibrationHandler
    Fetches the EEPROM data from Factory area and loads it into CalibrationHandler object If no calibration is flashed, it returns default  Returns:     The CalibrationHandler object containing the calibration currently flashed     on device EEPROM in Factory Area
    method
    readFactoryCalibrationRaw(self) -> list[int]: list[int]
    Fetches the raw EEPROM data from Factory area  Throws:     std::runtime_exception if any error occurred  Returns:     Binary dump of Factory area EEPROM data
    method
    removeLogCallback(self, callbackId: typing.SupportsInt) -> bool: bool
    Removes a callback  Parameter ``callbackId``:     Id of callback to be removed  Returns:     True if callback was removed, false otherwise
    method
    setCalibration(self, calibrationDataHandler: CalibrationHandler)
    Sets the Calibration at runtime. This is not persistent and will be lost after device reset.  Throws:     std::runtime_exception if failed to set the calibration  Parameter ``calibrationObj``:     CalibrationHandler object which is loaded with calibration information.
    method
    setIrFloodLightBrightness(self, mA: typing.SupportsFloat, mask: typing.SupportsInt = -1) -> bool: bool
    Sets the brightness of the IR Flood Light. Limits: up to 1500mA at 30% duty cycle. The duty cycle is controlled by the `left` camera STROBE, aligned to start of exposure. If the dot projector is also enabled, its lower duty cycle limits take precedence. The emitter is turned off by default  Parameter ``mA``:     Current in mA that will determine brightness, 0 or negative to turn off  Parameter ``mask``:     Optional mask to modify only Left (0x1) or Right (0x2) sides on OAK-D-Pro-W-DEV  Returns:     True on success, false if not found or other failure
    method
    setIrFloodLightIntensity(self, intensity: typing.SupportsFloat, mask: typing.SupportsInt = -1) -> bool: bool
    Sets the intensity of the IR Flood Light. Limits: Intensity is directly normalized to 0 - 1500mA current. The duty cycle is 30% when exposure time is longer than 30% frame time. Otherwise, duty cycle is 100% of exposure time. The duty cycle is controlled by the `left` camera STROBE, aligned to start of exposure. The emitter is turned off by default  Parameter ``intensity``:     Intensity on range 0 to 1, that will determine brightness, 0 or negative to turn off  Parameter ``mask``:     Optional mask to modify only Left (0x1) or Right (0x2) sides on OAK-D-Pro-W-DEV  Returns:     True on success, false if not found or other failure
    method
    setIrLaserDotProjectorBrightness(self, mA: typing.SupportsFloat, mask: typing.SupportsInt = -1) -> bool: bool
    Sets the brightness of the IR Laser Dot Projector. Limits: up to 765mA at 30% duty cycle, up to 1200mA at 6% duty cycle. The duty cycle is controlled by `left` camera STROBE, aligned to start of exposure. The emitter is turned off by default  Parameter ``mA``:     Current in mA that will determine brightness, 0 or negative to turn off  Parameter ``mask``:     Optional mask to modify only Left (0x1) or Right (0x2) sides on OAK-D-Pro-W-DEV  Returns:     True on success, false if not found or other failure
    method
    setIrLaserDotProjectorIntensity(self, intensity: typing.SupportsFloat, mask: typing.SupportsInt = -1) -> bool: bool
    Sets the intensity of the IR Laser Dot Projector. Limits: up to 765mA at 30% frame time duty cycle when exposure time is longer than 30% frame time. Otherwise, duty cycle is 100% of exposure time, with current increased up to max 1200mA to make up for shorter duty cycle. The duty cycle is controlled by `left` camera STROBE, aligned to start of exposure. The emitter is turned off by default  Parameter ``intensity``:     Intensity on range 0 to 1, that will determine brightness. 0 or negative to turn off  Parameter ``mask``:     Optional mask to modify only Left (0x1) or Right (0x2) sides on OAK-D-Pro-W-DEV  Returns:     True on success, false if not found or other failure
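    Example: a minimal sketch of driving the IR emitters on a device that has them (e.g. an OAK-D Pro series camera); `pipeline` is assumed:

    ```python
    with dai.Device(pipeline) as device:
        # Intensity variants take a normalized 0..1 value; the device maps it
        # onto the current limits described above.
        device.setIrLaserDotProjectorIntensity(0.5)
        device.setIrFloodLightIntensity(0.1)
    ```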
    method
    setLogLevel(self, level: LogLevel)
    Sets the devices logging severity level. This level affects which logs are transferred from device to host.  Parameter ``level``:     Logging severity
    method
    setLogOutputLevel(self, level: LogLevel)
    Sets logging level which decides printing level to standard output. If lower than setLogLevel, no messages will be printed  Parameter ``level``:     Standard output printing severity
    method
    setSystemInformationLoggingRate(self, rateHz: typing.SupportsFloat)
    Sets rate of system information logging ("info" severity). Default 1Hz If parameter is less or equal to zero, then system information logging will be disabled  Parameter ``rateHz``:     Logging rate in Hz
    method
    setXLinkChunkSize(self, sizeBytes: typing.SupportsInt)
    Sets the chunk size for splitting device-sent XLink packets. A larger value could increase performance, and 0 disables chunking. A negative value is ignored. Device defaults are configured per protocol, currently 64*1024 for both USB and Ethernet.  Parameter ``sizeBytes``:     XLink chunk size in bytes
    method
    startIMUFirmwareUpdate(self, forceUpdate: bool = False) -> bool: bool
    Starts IMU firmware update asynchronously only if IMU node is not running. If current firmware version is the same as embedded firmware version then it's a no-op. Can be overridden by forceUpdate parameter. State of firmware update can be monitored using getIMUFirmwareUpdateStatus API.  Parameter ``forceUpdate``:     Force firmware update or not. Will perform FW update regardless of current version and embedded firmware version.  Returns:     Returns whether firmware update can be started. Returns false if IMU node is started.
    method
    class

    depthai.DeviceBootloader

    class
    Memory
    Members:    AUTO    FLASH    EMMC
    class
    Section
    Members:    AUTO    HEADER    BOOTLOADER    BOOTLOADER_CONFIG    APPLICATION
    class
    Type
    Members:    AUTO    USB    NETWORK
    static method
    DeviceBootloader.getAllAvailableDevices() -> list[DeviceInfo]: list[DeviceInfo]
    Searches for connected devices in either UNBOOTED or BOOTLOADER states.  Returns:     Vector of all found devices
    static method
    DeviceBootloader.getFirstAvailableDevice() -> tuple[bool, DeviceInfo]: tuple[bool, DeviceInfo]
    Searches for connected devices in either UNBOOTED or BOOTLOADER states and returns first available.  Returns:     Tuple of boolean and DeviceInfo. If found boolean is true and DeviceInfo     describes the device. Otherwise false
    method
    bootMemory(self, fw: collections.abc.Sequence [ typing.SupportsInt ])
    Boots a custom FW in memory  Parameter ``fw``:     Firmware to boot  Throws:     A runtime exception if there are any communication issues
    method
    bootUsbRomBootloader(self)
    Boots into integrated ROM bootloader in USB mode  Throws:     A runtime exception if there are any communication issues
    method
    close(self)
    Closes the connection to device. A better alternative is to use the context manager: `with depthai.DeviceBootloader(deviceInfo) as bootloader:`
    method
    flashBootHeader(self, memory: DeviceBootloader.Memory, frequency: typing.SupportsInt = -1, location: typing.SupportsInt = -1, dummyCycles: typing.SupportsInt = -1, offset: typing.SupportsInt = -1) -> tuple[bool, str]: tuple[bool, str]
    Flash optimized boot header  Parameter ``memory``:     Which memory to flash the header to  Parameter ``frequency``:     SPI specific parameter, frequency in MHz  Parameter ``location``:     Target location the header should boot to. Defaults to location of bootloader  Parameter ``dummyCycles``:     SPI specific parameter  Parameter ``offset``:     Offset in memory to flash the header to. Defaults to offset of boot header  Returns:     status as std::tuple<bool, std::string>
    method
    flashClear(self, memory: DeviceBootloader.Memory = ...) -> tuple[bool, str]: tuple[bool, str]
    Clears flashed application on the device by removing the SBR boot structure. Doesn't remove the fast boot header, so the capability to still boot the application is kept
    method
    flashConfig(self, config: DeviceBootloader.Config, memory: DeviceBootloader.Memory = ..., type: DeviceBootloader.Type = ...) -> tuple[bool, str]: tuple[bool, str]
    Flashes configuration to bootloader  Parameter ``configData``:     Configuration structure  Parameter ``memory``:     Optional - to which memory flash configuration  Parameter ``type``:     Optional - for which type of bootloader to flash configuration
    method
    flashConfigClear(self, memory: DeviceBootloader.Memory = ..., type: DeviceBootloader.Type = ...) -> tuple[bool, str]: tuple[bool, str]
    Clears configuration data  Parameter ``memory``:     Optional - on which memory to clear configuration data  Parameter ``type``:     Optional - for which type of bootloader to clear configuration data
    method
    flashConfigData(self, configData: json, memory: DeviceBootloader.Memory = ..., type: DeviceBootloader.Type = ...) -> tuple[bool, str]: tuple[bool, str]
    Flashes configuration data to bootloader  Parameter ``configData``:     Unstructured configuration data  Parameter ``memory``:     Optional - to which memory flash configuration  Parameter ``type``:     Optional - for which type of bootloader to flash configuration
    method
    flashConfigFile(self, configData: Path, memory: DeviceBootloader.Memory = ..., type: DeviceBootloader.Type = ...) -> tuple[bool, str]: tuple[bool, str]
    Flashes configuration data to bootloader  Parameter ``configData``:     Path to configuration file  Parameter ``memory``:     Optional - which memory to flash the configuration to  Parameter ``type``:     Optional - for which type of bootloader to flash configuration
    method
    flashFastBootHeader(self, memory: DeviceBootloader.Memory, frequency: typing.SupportsInt = -1, location: typing.SupportsInt = -1, dummyCycles: typing.SupportsInt = -1, offset: typing.SupportsInt = -1) -> tuple[bool, str]: tuple[bool, str]
    Flash fast boot header. Application must already be present in flash, or location must be specified manually. Note - Can soft brick your device if firmware location changes.  Parameter ``memory``:     Which memory to flash the header to  Parameter ``frequency``:     SPI specific parameter, frequency in MHz  Parameter ``location``:     Target location the header should boot to. Default to location of bootloader  Parameter ``dummyCycles``:     SPI specific parameter  Parameter ``offset``:     Offset in memory to flash the header to. Defaults to offset of boot header  Returns:     status as std::tuple<bool, std::string>
    method
    flashGpioModeBootHeader(self, memory: DeviceBootloader.Memory, mode: typing.SupportsInt) -> tuple[bool, str]: tuple[bool, str]
    Flash a boot header which boots the same as the equivalent GPIO mode would  Parameter ``gpioMode``:     GPIO mode equivalent
    method
    flashUsbRecoveryBootHeader(self, memory: DeviceBootloader.Memory) -> tuple[bool, str]: tuple[bool, str]
    Flash USB recovery boot header. Switches to USB ROM Bootloader  Parameter ``memory``:     Which memory to flash the header to
    method
    flashUserBootloader(self, progressCallback: collections.abc.Callable [ [ typing.SupportsFloat ] , None ], path: Path = '') -> tuple[bool, str]: tuple[bool, str]
    Flashes user bootloader to the current board. Available for NETWORK bootloader type  Parameter ``progressCallback``:     Callback that sends back a value between 0..1 which signifies current flashing progress  Parameter ``path``:     Optional path to a custom bootloader to flash
    method
    getFlashedVersion(self) -> depthai.Version|None: depthai.Version|None
    Returns:     Version of the bootloader that is flashed on the device. Nullopt when the     version could not be retrieved because the device was in X_LINK_UNBOOTED     state before booting the bootloader.
    method
    getMemoryInfo(self, arg0: DeviceBootloader.Memory) -> DeviceBootloader.MemoryInfo: DeviceBootloader.MemoryInfo
    Retrieves information about specified memory  Parameter ``memory``:     Specifies which memory to query
    method
    getType(self) -> DeviceBootloader.Type: DeviceBootloader.Type
    Returns:     Type of currently connected bootloader
    method
    getVersion(self) -> Version: Version
    Returns:     Version of current running bootloader
    method
    isAllowedFlashingBootloader(self) -> bool: bool
    Returns:     True if allowed to flash bootloader
    method
    isEmbeddedVersion(self) -> bool: bool
    Returns:     True when the bootloader was booted using the latest bootloader integrated in the library. False when a bootloader already running on the device was connected to.
    method
    isUserBootloader(self) -> bool: bool
    Retrieves whether current bootloader is User Bootloader (B out of A/B configuration)
    method
    isUserBootloaderSupported(self) -> bool: bool
    Checks whether User Bootloader is supported with current bootloader  Returns:     True if User Bootloader is supported, false otherwise
    method
    readApplicationInfo(self, memory: DeviceBootloader.Memory) -> DeviceBootloader.ApplicationInfo: DeviceBootloader.ApplicationInfo
    Reads information about flashed application in specified memory from device  Parameter ``memory``:     Specifies which memory to query
    method
    readConfig(self, memory: DeviceBootloader.Memory = ..., type: DeviceBootloader.Type = ...) -> DeviceBootloader.Config: DeviceBootloader.Config
    Reads configuration from bootloader  Parameter ``memory``:     Optional - from which memory to read configuration  Parameter ``type``:     Optional - from which type of bootloader to read configuration  Returns:     Configuration structure
    method
    readConfigData(self, memory: DeviceBootloader.Memory = ..., type: DeviceBootloader.Type = ...) -> json: json
    Reads configuration data from bootloader  Returns:     Unstructured configuration data  Parameter ``memory``:     Optional - from which memory to read configuration data  Parameter ``type``:     Optional - from which type of bootloader to read configuration data
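    Example: a minimal read-only sketch; it connects to the first device found in bootloader/unbooted state and queries it (flashing calls are intentionally omitted here since they write to device memory):

    ```python
    import depthai as dai

    found, info = dai.DeviceBootloader.getFirstAvailableDevice()
    if found:
        with dai.DeviceBootloader(info) as bl:
            print("type   :", bl.getType())
            print("version:", bl.getVersion())
            print("memory :", bl.getMemoryInfo(dai.DeviceBootloader.Memory.FLASH))
    ```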
    method
    class

    depthai.DeviceBootloader.ApplicationInfo

    class

    depthai.DeviceBootloader.MemoryInfo

    class

    depthai.EdgeDetectorConfig(depthai.Buffer)

    method
    get(self) -> RawEdgeDetectorConfig: RawEdgeDetectorConfig
    Retrieve configuration data for EdgeDetector.  Returns:     config for EdgeDetector
    method
    getConfigData(self) -> EdgeDetectorConfigData: EdgeDetectorConfigData
    Retrieve configuration data for EdgeDetector  Returns:     EdgeDetectorConfigData: sobel filter horizontal and vertical 3x3 kernels
    method
    set(self, config: RawEdgeDetectorConfig) -> EdgeDetectorConfig: EdgeDetectorConfig
    Set explicit configuration.  Parameter ``config``:     Explicit configuration
    method
    setSobelFilterKernels(self, horizontalKernel: collections.abc.Sequence [ collections.abc.Sequence [ typing.SupportsInt ] ], verticalKernel: collections.abc.Sequence [ collections.abc.Sequence [ typing.SupportsInt ] ])
    Set sobel filter horizontal and vertical 3x3 kernels  Parameter ``horizontalKernel``:     Used for horizontal gradient computation in 3x3 Sobel filter  Parameter ``verticalKernel``:     Used for vertical gradient computation in 3x3 Sobel filter
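    Example: a minimal sketch of sending an updated Sobel kernel pair at runtime; `edgeCfgQueue` stands for an input queue linked to the EdgeDetector's `inputConfig` (hypothetical wiring):

    ```python
    cfg = dai.EdgeDetectorConfig()
    sobelH = [[1, 0, -1], [2, 0, -2], [1, 0, -1]]   # horizontal gradient
    sobelV = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]   # vertical gradient
    cfg.setSobelFilterKernels(sobelH, sobelV)
    edgeCfgQueue.send(cfg)
    ```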
    class

    depthai.EdgeDetectorConfigData

    method
    property
    sobelFilterHorizontalKernel
    Used for horizontal gradient computation in 3x3 Sobel filter Format - 3x3 matrix, 2nd column must be 0 Default - +1 0 -1; +2 0 -2; +1 0 -1
    method
    property
    sobelFilterVerticalKernel
    Used for vertical gradient computation in 3x3 Sobel filter Format - 3x3 matrix, 2nd row must be 0 Default - +1 +2 +1; 0 0 0; -1 -2 -1
    method
    class

    depthai.EdgeDetectorProperties

    property
    initialConfig
    Initial edge detector config
    method
    property
    numFramesPool
    Num frames in output pool
    method
    property
    outputFrameSize
    Maximum output frame size in bytes (eg: 300x300 BGR image -> 300*300*3 bytes)
    method
    class

    depthai.EncodedFrame(depthai.Buffer)

    class
    FrameType
    Members:    I    P    B    Unknown
    class
    Profile
    Members:    JPEG    AVC    HEVC
    method
    getBitrate(self) -> int: int
    Retrieves the encoding bitrate
    method
    getColorTemperature(self) -> int: int
    Retrieves white-balance color temperature of the light source, in kelvins
    method
    getHeight(self) -> int: int
    Retrieves image height in pixels
    method
    getInstanceNum(self) -> int: int
    Retrieves instance number
    method
    getLensPosition(self) -> int: int
    Retrieves lens position, range 0..255. Returns -1 if not available
    method
    getLensPositionRaw(self) -> float: float
    Retrieves lens position, range 0.0f..1.0f. Returns -1 if not available
    method
    getLossless(self) -> bool: bool
    Returns true if encoding is lossless (JPEG only)
    method
    getProfile(self) -> RawEncodedFrame.Profile: RawEncodedFrame.Profile
    Retrieves the encoding profile (JPEG, AVC or HEVC)
    method
    getQuality(self) -> int: int
    Retrieves the encoding quality
    method
    getSensitivity(self) -> int: int
    Retrieves sensitivity, as an ISO value
    method
    getSequenceNum(self) -> int: int
    Retrieves sequence number
    method
    getTimestamp(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp related to dai::Clock::now()
    method
    getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
    method
    getWidth(self) -> int: int
    Retrieves image width in pixels
    method
    setHeight(self, arg0: typing.SupportsInt) -> EncodedFrame: EncodedFrame
    Specifies frame height  Parameter ``height``:     frame height
    method
    setLossless(self, arg0: bool) -> EncodedFrame: EncodedFrame
    Specifies whether encoding is lossless (JPEG only)
    method
    setSequenceNum(self, arg0: typing.SupportsInt) -> EncodedFrame: EncodedFrame
    Specifies sequence number  Parameter ``seq``:     Sequence number
    method
    setTimestamp(self, arg0: datetime.timedelta) -> EncodedFrame: EncodedFrame
    Sets image timestamp related to dai::Clock::now()
    method
    setWidth(self, arg0: typing.SupportsInt) -> EncodedFrame: EncodedFrame
    Specifies frame width  Parameter ``width``:     frame width
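    Example: a minimal sketch of saving an encoded bitstream to a file; `bitstream` is an assumed XLinkOut stream fed by a VideoEncoder node, and `device` a running depthai.Device:

    ```python
    q = device.getOutputQueue("bitstream", maxSize=30, blocking=True)
    with open("video.h265", "wb") as f:
        while True:
            encFrame = q.get()            # dai.EncodedFrame
            f.write(encFrame.getData())   # raw encoded bytes (Buffer payload)
    ```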
    class

    depthai.FeatureTrackerConfig(depthai.Buffer)

    class
    CornerDetector
    Corner detector configuration structure.
    class
    FeatureMaintainer
    FeatureMaintainer configuration structure.
    class
    MotionEstimator
    Used for feature reidentification between current and previous features.
    method
    get(self) -> RawFeatureTrackerConfig: RawFeatureTrackerConfig
    Retrieve configuration data for FeatureTracker.  Returns:     config for feature tracking algorithm
    method
    set(self, config: RawFeatureTrackerConfig) -> FeatureTrackerConfig: FeatureTrackerConfig
    Set explicit configuration.  Parameter ``config``:     Explicit configuration
    method
    setHwMotionEstimation(self) -> FeatureTrackerConfig: FeatureTrackerConfig
    Set hardware accelerated motion estimation using block matching. Faster than optical flow (software implementation) but might not be as accurate.
    method
    setNumTargetFeatures(self, numTargetFeatures: typing.SupportsInt) -> FeatureTrackerConfig: FeatureTrackerConfig
    Set number of target features to detect.  Parameter ``numTargetFeatures``:     Number of features
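    Example: a minimal sketch of reconfiguring a FeatureTracker at runtime; `trackerCfgQueue` stands for an input queue linked to the node's `inputConfig` (hypothetical wiring):

    ```python
    cfg = dai.FeatureTrackerConfig()
    cfg.setNumTargetFeatures(256)
    cfg.setHwMotionEstimation()   # block matching instead of optical flow
    trackerCfgQueue.send(cfg)
    ```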
    method
    class

    depthai.FeatureTrackerConfig.CornerDetector

    class
    Thresholds
    Threshold settings structure for corner detector.
    class
    Type
    Members:    HARRIS    SHI_THOMASI
    method
    property
    cellGridDimension
    Ensures distributed feature detection across the image. Image is divided into horizontal and vertical cells, each cell has a target feature count = numTargetFeatures / cellGridDimension. Each cell has its own feature threshold. A value of 4 means that the image is divided into 4x4 cells of equal width/height. Maximum 4, minimum 1.
    method
    property
    enableSobel
    Enable 3x3 Sobel operator to smoothen the image whose gradient is to be computed. If disabled, a simple 1D row/column differentiator is used for gradient.
    method
    property
    enableSorting
    Enable sorting detected features based on their score or not.
    method
    property
    numMaxFeatures
    Hard limit for the maximum number of features that can be detected. 0 means auto, will be set to the maximum value based on memory constraints.
    method
    property
    numTargetFeatures
    Target number of features to detect. Maximum number of features is determined at runtime based on algorithm type.
    method
    property
    thresholds
    Threshold settings. These are advanced settings, suitable for debugging/special cases.
    method
    property
    type
    Corner detector algorithm type.
    method
    class

    depthai.FeatureTrackerConfig.CornerDetector.Thresholds

    method
    property
    decreaseFactor
    When the detected number of features exceeds the maximum in a cell, the threshold is lowered by multiplying its value with this factor.
    method
    property
    increaseFactor
    When the detected number of features doesn't exceed the maximum in a cell, the threshold is increased by multiplying its value with this factor.
    method
    property
    initialValue
    Minimum strength of a feature which will be detected. 0 means automatic threshold update. Recommended so the tracker can adapt to different scenes/textures. Each cell has its own threshold. Empirical value.
    method
    property
    max
    Maximum limit for threshold. Applicable when automatic threshold update is enabled. 0 means auto. Empirical value.
    method
    property
    min
    Minimum limit for threshold. Applicable when automatic threshold update is enabled. 0 means auto, 6000000 for HARRIS, 1200 for SHI_THOMASI. Empirical value.
    method
    class

    depthai.FeatureTrackerConfig.FeatureMaintainer

    method
    property
    enable
    Enable feature maintaining or not.
    method
    property
    lostFeatureErrorThreshold
    Optical flow measures the tracking error for every feature. If the point can’t be tracked or it’s out of the image it will set this error to a maximum value. This threshold defines the level where the tracking accuracy is considered too bad to keep the point.
    method
    property
    minimumDistanceBetweenFeatures
    Used to filter out detected feature points that are too close. Requires sorting enabled in detector. Unit of measurement is squared euclidean distance in pixels.
    method
    property
    trackedFeatureThreshold
    Once a feature is detected and we start tracking it, its Harris score must be updated on each image. This is needed because a feature point can disappear, or it can become too weak to be tracked. This threshold defines the point where such a feature must be dropped. As the goal of the algorithm is to provide longer tracks, we try to add strong points and track them until they are absolutely untrackable. This is why this value is usually smaller than the detection threshold.
    method
    class

    depthai.FeatureTrackerConfig.MotionEstimator

    class
    OpticalFlow
    Optical flow configuration structure.
    class
    Type
    Members:    LUCAS_KANADE_OPTICAL_FLOW    HW_MOTION_ESTIMATION
    method
    property
    enable
    Enable motion estimation or not.
    method
    property
    opticalFlow
    Optical flow configuration. Takes effect only if MotionEstimator algorithm type set to LUCAS_KANADE_OPTICAL_FLOW.
    method
    property
    type
    Motion estimator algorithm type.
    method
    class

    depthai.FeatureTrackerConfig.MotionEstimator.OpticalFlow

    method
    property
    epsilon
    Feature tracking termination criteria. Optical flow will refine the feature position on each pyramid level until the displacement between two refinements is smaller than this value. Decreasing this number increases runtime.
    method
    property
    maxIterations
    Feature tracking termination criteria. Optical flow will refine the feature position at most this many times on each pyramid level. If the epsilon criterion described above is not met after this number of iterations, the algorithm will continue with the currently calculated value. Increasing this number increases runtime.
    method
    property
    pyramidLevels
    Number of pyramid levels, only for optical flow. AUTO means it's decided based on input resolution: 3 if image width <= 640, else 4. Valid values are 3 or 4 for VGA, and 4 for 720p and above.
    method
    property
    searchWindowHeight
    Image patch height used to track features. Must be an odd number, maximum 9. N means the algorithm will be able to track motion at most (N-1)/2 pixels in a direction per pyramid level. Increasing this number increases runtime
    method
    property
    searchWindowWidth
    Image patch width used to track features. Must be an odd number, maximum 9. N means the algorithm will be able to track motion at most (N-1)/2 pixels in a direction per pyramid level. Increasing this number increases runtime
    method
    class

    depthai.FeatureTrackerProperties

    property
    initialConfig
    Initial feature tracker config
    method
    property
    numMemorySlices
    Number of memory slices reserved for feature tracking. Optical flow can use 1 or 2 memory slices, while for corner detection only 1 is enough. Maximum number of features depends on the number of allocated memory slices. Hardware motion estimation doesn't require memory slices. Maximum 2, minimum 1.
    method
    property
    numShaves
    Number of shaves reserved for feature tracking. Optical flow can use 1 or 2 shaves, while for corner detection only 1 is enough. Hardware motion estimation doesn't require shaves. Maximum 2, minimum 1.
    method
    class

    depthai.GlobalProperties

    property
    cameraTuningBlobSize
    Camera tuning blob size in bytes
    method
    property
    cameraTuningBlobUri
    Uri which points to camera tuning blob
    method
    property
    sippBufferSize
    SIPP (Signal Image Processing Pipeline) internal memory pool. SIPP is a framework used to schedule HW filters, e.g. ISP, Warp, Median filter etc. Changing the size of this pool is meant for advanced use cases, pushing the limits of the HW. By default memory is allocated in high speed CMX memory. Setting it to 0 will allocate 256 kilobytes in DDR. Units are bytes.
    method
    property
    sippDmaBufferSize
    SIPP (Signal Image Processing Pipeline) internal DMA memory pool. SIPP is a framework used to schedule HW filters, e.g. ISP, Warp, Median filter etc. Changing the size of this pool is meant for advanced use cases, pushing the limits of the HW. Memory is allocated in high speed CMX memory. Units are bytes.
    method
    property
    xlinkChunkSize
    Chunk size for splitting device-sent XLink packets, in bytes. A larger value could increase performance, with 0 disabling chunking. A negative value won't modify the device defaults - configured per protocol, currently 64*1024 for both USB and Ethernet.
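    Example: these global properties are normally set through the owning Pipeline; a minimal sketch using the Pipeline-level setter:

    ```python
    pipeline = dai.Pipeline()
    pipeline.setXLinkChunkSize(0)   # disable chunking of device-sent packets
    ```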
    method
    class

    depthai.IMUData(depthai.Buffer)

    class

    depthai.IMUReportAccelerometer(depthai.IMUReport)

    class

    depthai.IMUReportGyroscope(depthai.IMUReport)

    class

    depthai.IMUReportMagneticField(depthai.IMUReport)

    class

    depthai.ImageAlignConfig(depthai.Buffer)

    method
    get(self) -> RawImageAlignConfig: RawImageAlignConfig
    Retrieve configuration data for ImageAlign.  Returns:     config for ImageAlign
    method
    set(self, config: RawImageAlignConfig) -> ImageAlignConfig: ImageAlignConfig
    Set explicit configuration.  Parameter ``config``:     Explicit configuration
    class

    depthai.ImageAlignProperties

    class

    depthai.ImageManipConfig(depthai.Buffer)

    method
    get(self) -> RawImageManipConfig: RawImageManipConfig
    Retrieve configuration data for ImageManip.  Returns:     config for ImageManip
    method
    getColormap(self) -> Colormap: Colormap
    Returns:     specified colormap
    method
    getCropXMax(self) -> float: float
    Returns:     Bottom right X coordinate of crop region
    method
    getCropXMin(self) -> float: float
    Returns:     Top left X coordinate of crop region
    method
    getCropYMax(self) -> float: float
    Returns:     Bottom right Y coordinate of crop region
    method
    getCropYMin(self) -> float: float
    Returns:     Top left Y coordinate of crop region
    method
    getInterpolation(self) -> Interpolation: Interpolation
    Retrieve which interpolation method to use
    method
    getResizeHeight(self) -> int: int
    Returns:     Output image height
    method
    getResizeWidth(self) -> int: int
    Returns:     Output image width
    method
    isResizeThumbnail(self) -> bool: bool
    Returns:     True if resize thumbnail mode is set, false otherwise
    method
    set(self, config: RawImageManipConfig) -> ImageManipConfig: ImageManipConfig
    Set explicit configuration.  Parameter ``config``:     Explicit configuration
    method
    setCenterCrop(self, ratio: typing.SupportsFloat, whRatio: typing.SupportsFloat = 1.0) -> ImageManipConfig: ImageManipConfig
    Specifies a centered crop.  Parameter ``ratio``:     Ratio between input image and crop region (0..1)  Parameter ``whRatio``:     Crop region aspect ratio - 1 equals to square, 1.7 equals to 16:9, ...
    method
    setCropRotatedRect(self, rr: RotatedRect, normalizedCoords: bool = True) -> ImageManipConfig: ImageManipConfig
    Specifies crop with rotated rectangle. Optionally as non normalized coordinates  Parameter ``rr``:     Rotated rectangle which specifies crop  Parameter ``normalizedCoords``:     If true coordinates are in normalized range (0..1) otherwise absolute
    method
    setFrameType(self, type: RawImgFrame.Type) -> ImageManipConfig: ImageManipConfig
    Specify output frame type.  Parameter ``type``:     Frame type
    method
    setHorizontalFlip(self, flip: bool) -> ImageManipConfig: ImageManipConfig
    Specify horizontal flip  Parameter ``flip``:     True to enable flip, false otherwise
    method
    setInterpolation(self, interpolation: Interpolation) -> ImageManipConfig: ImageManipConfig
    Specify which interpolation method to use  Parameter ``interpolation``:     type of interpolation
    method
    setReusePreviousImage(self, reuse: bool) -> ImageManipConfig: ImageManipConfig
    Instruct ImageManip to not remove current image from its queue and use the same for next message.  Parameter ``reuse``:     True to enable reuse, false otherwise
    method
    setRotationDegrees(self, deg: typing.SupportsFloat) -> ImageManipConfig: ImageManipConfig
    Specifies clockwise rotation in degrees  Parameter ``deg``:     Rotation in degrees
    method
    setRotationRadians(self, rad: typing.SupportsFloat) -> ImageManipConfig: ImageManipConfig
    Specifies clockwise rotation in radians  Parameter ``rad``:     Rotation in radians
    method
    setSkipCurrentImage(self, skip: bool) -> ImageManipConfig: ImageManipConfig
    Instructs ImageManip to skip current image and wait for next in queue.  Parameter ``skip``:     True to skip current image, false otherwise
    method
    setVerticalFlip(self, flip: bool)
    Specify vertical flip  Parameter ``flip``:     True to enable vertical flip, false otherwise
    method
    setWarpBorderFillColor(self, red: typing.SupportsInt, green: typing.SupportsInt, blue: typing.SupportsInt) -> ImageManipConfig: ImageManipConfig
    Specifies fill color for border pixels. Example:  - setWarpBorderFillColor(255,255,255) -> white  - setWarpBorderFillColor(0,0,255) -> blue  Parameter ``red``:     Red component  Parameter ``green``:     Green component  Parameter ``blue``:     Blue component
    method
    setWarpTransformFourPoints(self, pt: collections.abc.Sequence [ Point2f ], normalizedCoords: bool) -> ImageManipConfig: ImageManipConfig
    Specifies warp by supplying 4 points in either absolute or normalized coordinates  Parameter ``pt``:     4 points specifying warp  Parameter ``normalizedCoords``:     If true pt is interpreted as normalized, absolute otherwise
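    Example: a minimal sketch of a runtime reconfiguration message; `manipCfgQueue` stands for an input queue linked to an ImageManip node's `inputConfig` (hypothetical wiring):

    ```python
    cfg = dai.ImageManipConfig()
    cfg.setCenterCrop(0.5, 1.0)    # centered square crop, half of the input
    cfg.setRotationDegrees(90)     # clockwise rotation
    cfg.setFrameType(dai.RawImgFrame.Type.BGR888p)
    manipCfgQueue.send(cfg)
    ```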
    method
    class

    depthai.ImgDetections(depthai.Buffer)

    method
    __init__(self)
    Construct ImgDetections message
    method
    getSequenceNum(self) -> int: int
    Retrieves sequence number
    method
    getTimestamp(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp related to dai::Clock::now()
    method
    getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
    property
    detections
    Detections
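    Example: a minimal sketch of consuming detections; `detQueue` stands for an output queue fed by a detection network:

    ```python
    inDet = detQueue.get()    # dai.ImgDetections
    for det in inDet.detections:
        # coordinates are normalized (0..1) relative to the NN input frame
        print(det.label, round(det.confidence, 2),
              det.xmin, det.ymin, det.xmax, det.ymax)
    ```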
    method
    class

    depthai.ImgFrame(depthai.Buffer)

    class
    Type
    Members:    YUV422i    YUV444p    YUV420p    YUV422p    YUV400p    RGBA8888    RGB161616    RGB888p    BGR888p    RGB888i    BGR888i    RGBF16F16F16p    BGRF16F16F16p    RGBF16F16F16i    BGRF16F16F16i    GRAY8    GRAYF16    LUT2    LUT4    LUT16    RAW16    RAW14    RAW12    RAW10    RAW8    PACK10    PACK12    YUV444i    NV12    NV21    BITSTREAM    HDR    NONE
    method
    getCategory(self) -> int: int
    Retrieves image category
    method
    getColorTemperature(self) -> int: int
    Retrieves white-balance color temperature of the light source, in kelvins
    method
    getCvFrame(self) -> typing.Any: typing.Any
    Returns BGR or grayscale frame compatible with use in other opencv functions
    method
    getFrame(self, copy: bool = False) -> numpy.ndarray: numpy.ndarray
    Returns numpy array with shape as specified by width, height and type
    method
    getHeight(self) -> int: int
    Retrieves image height in pixels
    method
    getInstanceNum(self) -> int: int
    Retrieves instance number
    method
    getLensPosition(self) -> int: int
    Retrieves lens position, range 0..255. Returns -1 if not available
    method
    getLensPositionRaw(self) -> float: float
    Retrieves lens position, range 0.0f..1.0f. Returns -1 if not available
    method
    getSensitivity(self) -> int: int
    Retrieves sensitivity, as an ISO value
    method
    getSequenceNum(self) -> int: int
    Retrieves sequence number
    method
    getWidth(self) -> int: int
    Retrieves image width in pixels
    method
    setFrame(self, array: numpy.ndarray)
    Copies array bytes to ImgFrame buffer
    method
    setHeight(self, height: typing.SupportsInt) -> ImgFrame: ImgFrame
    Specifies frame height  Parameter ``height``:     frame height
    method
    setInstanceNum(self, instance: typing.SupportsInt) -> ImgFrame: ImgFrame
    Instance number relates to the origin of the frame (which camera)  Parameter ``instance``:     Instance number
    method
    setSequenceNum(self, seq: typing.SupportsInt) -> ImgFrame: ImgFrame
    Specifies sequence number  Parameter ``seq``:     Sequence number
    method
    setTimestamp(self, timestamp: datetime.timedelta) -> ImgFrame: ImgFrame
    Sets image timestamp related to dai::Clock::now()
    method
    setTimestampDevice(self, arg0: datetime.timedelta) -> ImgFrame: ImgFrame
    Sets image timestamp related to dai::Clock::now()
    method
    setType(self, type: RawImgFrame.Type) -> ImgFrame: ImgFrame
    Specifies frame type, RGB, BGR, ...  Parameter ``type``:     Type of image
    method
    setWidth(self, width: typing.SupportsInt) -> ImgFrame: ImgFrame
    Specifies frame width  Parameter ``width``:     frame width
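    Example: a minimal sketch of displaying frames with OpenCV; `preview` is an assumed XLinkOut stream name and `device` a running depthai.Device:

    ```python
    import cv2

    q = device.getOutputQueue("preview", maxSize=4, blocking=False)
    imgFrame = q.get()    # dai.ImgFrame
    print(imgFrame.getWidth(), imgFrame.getHeight(), imgFrame.getSequenceNum())
    cv2.imshow("preview", imgFrame.getCvFrame())
    cv2.waitKey(1)
    ```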
    class

    depthai.ImgFrame.Type

    class

    depthai.MessageDemuxProperties

    property
    processor
    Which processor should execute the node.
    method
    class

    depthai.MessageGroup(depthai.Buffer)

    method
    getIntervalNs(self) -> int: int
    Retrieves interval between the first and the last message in the group.
    method
    getMessageNames(self) -> list[str]: list[str]
    Gets the names of messages in the group
    method
    getSequenceNum(self) -> int: int
    Retrieves sequence number
    method
    getTimestamp(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp related to dai::Clock::now()
    method
    getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
    method
    isSynced(self, arg0: typing.SupportsInt) -> bool: bool
    True if all messages in the group are in the interval  Parameter ``thresholdNs``:     Maximal interval between messages
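    Example: a minimal sketch of consuming the output of a Sync node; the stream name `xout` is assumed, and iterating a MessageGroup yields (name, message) pairs:

    ```python
    q = device.getOutputQueue("xout", maxSize=10, blocking=False)
    msgGrp = q.get()    # dai.MessageGroup
    print(msgGrp.getMessageNames(), msgGrp.getIntervalNs())
    for name, msg in msgGrp:
        print(name, type(msg).__name__)
    ```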
    class

    depthai.NNData(depthai.Buffer)

    method
    __init__(self)
    Construct NNData message.
    method
    getAllLayerNames(self) -> list[str]: list[str]
    Returns:     Names of all layers added
    method
    getAllLayers(self) -> list[TensorInfo]: list[TensorInfo]
    Returns:     All layers and their information
    method
    getFirstLayerFp16(self) -> list[float]: list[float]
    Convenience function to retrieve float values from the first layer's FP16 tensor  Returns:     Float data
    method
    getFirstLayerInt32(self) -> list[int]: list[int]
    Convenience function to retrieve INT32 values from the first layer's tensor  Returns:     INT32 data
    method
    getFirstLayerUInt8(self) -> list[int]: list[int]
    Convenience function to retrieve U8 data from the first layer  Returns:     U8 binary data
    method
    getLayer(self, name: str, tensor: TensorInfo) -> bool: bool
    Retrieve a layer's tensor information  Parameter ``name``:     Name of the layer  Parameter ``tensor``:     Outputs tensor information of that layer  Returns:     True if layer exists, false otherwise
    method
    getLayerDatatype(self, name: str, datatype: TensorInfo.DataType) -> bool: bool
    Retrieve datatype of a layer's tensor  Parameter ``name``:     Name of the layer  Parameter ``datatype``:     Datatype of the layer's tensor  Returns:     True if layer exists, false otherwise
    method
    getLayerFp16(self, name: str) -> list[float]: list[float]
    Convenience function to retrieve float values from layers FP16 tensor  Parameter ``name``:     Name of the layer  Returns:     Float data
    method
    getLayerInt32(self, name: str) -> list[int]: list[int]
    Convenience function to retrieve INT32 values from layers tensor  Parameter ``name``:     Name of the layer  Returns:     INT32 data
    method
    getLayerUInt8(self, name: str) -> list[int]: list[int]
    Convenience function to retrieve U8 data from layer  Parameter ``name``:     Name of the layer  Returns:     U8 binary data
    method
    getSequenceNum(self) -> int: int
    Retrieves sequence number
    method
    getTimestamp(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp related to dai::Clock::now()
    method
    getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
    method
    hasLayer(self, name: str) -> bool: bool
    Checks if given layer exists  Parameter ``name``:     Name of the layer  Returns:     True if layer exists, false otherwise
    method
    method
    method
    setTimestamp(self, arg0: datetime.timedelta) -> NNData: NNData
    Sets message timestamp related to dai::Clock::now()
    method
    setTimestampDevice(self, arg0: datetime.timedelta) -> NNData: NNData
    Sets message timestamp related to the device's monotonic clock
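    Example (a minimal sketch of reading inference results on the host; the "nn" stream name and FP16 output layer are assumptions):

        nnData = device.getOutputQueue("nn").get()  # dai.NNData
        print(nnData.getAllLayerNames())
        values = nnData.getFirstLayerFp16()         # flat list of floats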
    class

    depthai.Node

    class
    Connection
    Connection between an Input and Output
    class
    class
    Id
    Node identifier. Unique for every node in a single Pipeline
    class
    class
    class
    class
    variable
    method
    method
    method
    method
    getName(self) -> str: str
    Retrieves nodes name
    method
    method
    method
    property
    id
    Id of node
    class

    depthai.Node.DatatypeHierarchy

    class

    depthai.Node.Input

    class
    Type
    Members:    SReceiver    MReceiver
    variable
    variable
    variable
    variable
    method
    getBlocking(self) -> bool: bool
    Get input queue behavior  Returns:     True blocking, false overwriting
    method
    method
    getQueueSize(self) -> int: int
    Get input queue size.  Returns:     Maximum input queue size
    method
    getReusePreviousMessage(self) -> bool: bool
    Equivalent to getWaitForMessage but with inverted logic.
    method
    getWaitForMessage(self) -> bool: bool
    Get behavior whether to wait for this input when a Node processes certain data or not  Returns:     Whether to wait for message to arrive to this input or not
    method
    setBlocking(self, blocking: bool)
    Overrides default input queue behavior.  Parameter ``blocking``:     True blocking, false overwriting
    method
    setQueueSize(self, size: typing.SupportsInt)
    Overrides default input queue size. If queue size fills up, behavior depends on `blocking` attribute  Parameter ``size``:     Maximum input queue size
    method
    setReusePreviousMessage(self, reusePreviousMessage: bool)
    Equivalent to setWaitForMessage but with inverted logic.
    method
    setWaitForMessage(self, waitForMessage: bool)
    Overrides default wait for message behavior. Applicable for nodes with multiple inputs. Specifies behavior whether to wait for this input when a Node processes certain data or not.  Parameter ``waitForMessage``:     Whether to wait for message to arrive to this input or not
    property
    method
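    Example (a minimal sketch; making an input non-blocking with queue size 1 keeps only the newest message):

        xout = pipeline.create(dai.node.XLinkOut)
        xout.input.setBlocking(False)
        xout.input.setQueueSize(1)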
    class

    depthai.Node.Output

    class
    Type
    Members:    MSender    SSender
    variable
    variable
    variable
    method
    canConnect(self, input: Node.Input) -> bool: bool
    Check if connection is possible  Parameter ``input``:     Input to connect to  Returns:     True if connection is possible, false otherwise
    method
    getConnections(self) -> list[Node.Connection]: list[Node.Connection]
    Retrieve all connections from this output  Returns:     Vector of connections
    method
    method
    isSamePipeline(self, input: Node.Input) -> bool: bool
    Check if this output and given input are on the same pipeline.  See also:     canConnect for checking if connection is possible  Returns:     True if output and input are on the same pipeline
    method
    method
    property
    method
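    Example (a minimal sketch of checking and making a connection):

        cam = pipeline.create(dai.node.ColorCamera)
        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("video")
        if cam.video.canConnect(xout.input):
            cam.video.link(xout.input)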
    class

    depthai.ObjectTrackerConfig(depthai.Buffer)

    class

    depthai.ObjectTrackerProperties

    property
    detectionLabelsToTrack
    Which detections labels to track. Default all labels are tracked.
    method
    property
    maxObjectsToTrack
    Maximum number of objects to track. Maximum 60 for SHORT_TERM_KCF, maximum 1000 for other tracking methods. Default 60.
    method
    property
    occlusionRatioThreshold
    Occlusion ratio threshold. Used to filter out overlapping tracklets.
    method
    property
    trackerIdAssignmentPolicy
    New ID assignment policy.
    method
    property
    trackerThreshold
    Confidence threshold for tracklets. Above this threshold detections will be tracked. Default 0, all detections are tracked.
    method
    property
    trackerType
    Tracking method.
    method
    property
    trackingPerClass
    Whether tracker should take into consideration class label for tracking.
    method
    property
    trackletBirthThreshold
    Tracklet birth threshold. Minimum consecutive tracked frames required to consider a tracklet as a new instance.
    method
    property
    trackletMaxLifespan
    Tracklet lifespan in number of frames. Number of frames after which a LOST tracklet is removed.
    method
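    Example (a minimal sketch of the ObjectTracker node setters that populate these properties; label 15 is illustrative, e.g. 'person' for a MobileNet-SSD class map):

        tracker = pipeline.create(dai.node.ObjectTracker)
        tracker.setDetectionLabelsToTrack([15])
        tracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)
        tracker.setTrackerIdAssignmentPolicy(dai.TrackerIdAssignmentPolicy.SMALLEST_ID)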
    class

    depthai.OpenVINO

    class
    Blob
    OpenVINO Blob
    class
    Version
    OpenVINO Version supported version information  Members:    VERSION_2020_3    VERSION_2020_4    VERSION_2021_1    VERSION_2021_2    VERSION_2021_3    VERSION_2021_4    VERSION_2022_1    VERSION_UNIVERSAL
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    static method
    static method
    OpenVINO.getBlobLatestSupportedVersion()
    Returns latest potentially supported version by a given blob version.  Parameter ``majorVersion``:     Major version from OpenVINO blob  Parameter ``minorVersion``:     Minor version from OpenVINO blob  Returns:     Latest potentially supported version
    static method
    OpenVINO.getBlobSupportedVersions()
    Returns a list of potentially supported versions for a specified blob major and minor versions.  Parameter ``majorVersion``:     Major version from OpenVINO blob  Parameter ``minorVersion``:     Minor version from OpenVINO blob  Returns:     Vector of potentially supported versions
    static method
    OpenVINO.getVersionName(version: OpenVINO.Version) -> str: str
    Returns string representation of a given version  Parameter ``version``:     OpenVINO version  Returns:     Name of a given version
    static method
    static method
    OpenVINO.parseVersionName(versionString: str) -> OpenVINO.Version: OpenVINO.Version
    Creates Version from string representation. Throws if not possible.  Parameter ``versionString``:     Version as string  Returns:     Version object if successful
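    Example (a minimal sketch of round-tripping a version through its string name):

        v = dai.OpenVINO.parseVersionName("2021.4")
        print(dai.OpenVINO.getVersionName(v))  # "2021.4"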
    class

    depthai.OpenVINO.Blob

    method
    property
    data
    Blob data
    method
    property
    networkInputs
    Map of input names to additional information
    method
    property
    networkOutputs
    Map of output names to additional information
    method
    property
    numShaves
    Number of shaves the blob was compiled for
    method
    property
    numSlices
    Number of CMX slices the blob was compiled for
    method
    property
    stageCount
    Number of network stages
    method
    property
    version
    OpenVINO version
    method
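    Example (a minimal sketch of inspecting a compiled blob; "model.blob" is a placeholder path):

        blob = dai.OpenVINO.Blob("model.blob")
        print(blob.version, blob.numShaves, blob.stageCount)
        for name, tensor in blob.networkInputs.items():
            print(name, tensor.dims)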
    class

    depthai.Pipeline

    method
    __init__(self)
    Constructs a new pipeline
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    getCalibrationData(self) -> CalibrationHandler: CalibrationHandler
    gets the calibration data which is set through pipeline  Returns:     the calibrationHandler with calib data in the pipeline
    method
    method
    method
    getDeviceConfig(self) -> Device.Config: Device.Config
    Get device configuration needed for this pipeline
    method
    getGlobalProperties(self) -> GlobalProperties: GlobalProperties
    Returns:     Global properties of current pipeline
    method
    method
    getNodeMap(self) -> dict[int, Node]: dict[int, Node]
    Get a reference to internal node map
    method
    getOpenVINOVersion(self) -> OpenVINO.Version: OpenVINO.Version
    Get possible OpenVINO version to run this pipeline
    method
    method
    method
    remove(self, node: Node)
    Removes a node from pipeline
    method
    serializeToJson(self) -> json: json
    Returns whole pipeline represented as JSON
    method
    method
    setCalibrationData(self, calibrationDataHandler: CalibrationHandler)
    Sets the calibration in pipeline which overrides the calibration data in eeprom  Parameter ``calibrationDataHandler``:     CalibrationHandler object which is loaded with calibration information.
    method
    setCameraTuningBlobPath(self, path: Path)
    Set a camera IQ (Image Quality) tuning blob, used for all cameras
    method
    setOpenVINOVersion(self, version: OpenVINO.Version)
    Set a specific OpenVINO version to use with this pipeline
    method
    setSippBufferSize(self, sizeBytes: typing.SupportsInt)
    SIPP (Signal Image Processing Pipeline) internal memory pool. SIPP is a framework used to schedule HW filters, e.g. ISP, Warp, Median filter etc. Changing the size of this pool is meant for advanced use cases, pushing the limits of the HW. By default memory is allocated in high speed CMX memory. Setting it to 0 will allocate 256 kilobytes in DDR memory. Units are bytes.
    method
    setSippDmaBufferSize(self, sizeBytes: typing.SupportsInt)
    SIPP (Signal Image Processing Pipeline) internal DMA memory pool. SIPP is a framework used to schedule HW filters, e.g. ISP, Warp, Median filter etc. Changing the size of this pool is meant for advanced use cases, pushing the limits of the HW. Memory is allocated in high speed CMX memory. Units are bytes.
    method
    setXLinkChunkSize(self, sizeBytes: typing.SupportsInt)
    Set chunk size for splitting device-sent XLink packets, in bytes. A larger value could increase performance, with 0 disabling chunking. A negative value won't modify the device defaults - configured per protocol, currently 64*1024 for both USB and Ethernet.
    method
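    Example (a minimal sketch of building and running a pipeline; the "video" stream name is arbitrary):

        import depthai as dai

        pipeline = dai.Pipeline()
        cam = pipeline.create(dai.node.ColorCamera)
        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("video")
        cam.video.link(xout.input)

        with dai.Device(pipeline) as device:
            frame = device.getOutputQueue("video").get()  # dai.ImgFrame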
    class

    depthai.PointCloudConfig(depthai.Buffer)

    method
    method
    get(self) -> RawPointCloudConfig: RawPointCloudConfig
    Retrieve configuration data for PointCloud.  Returns:     config for point cloud generation
    method
    getSparse(self) -> bool: bool
    Retrieve sparse point cloud calculation status.  Returns:     true if sparse point cloud calculation is enabled, false otherwise
    method
    method
    set(self, config: RawPointCloudConfig) -> PointCloudConfig: PointCloudConfig
    Set explicit configuration.  Parameter ``config``:     Explicit configuration
    method
    setSparse(self, arg0: bool) -> PointCloudConfig: PointCloudConfig
    Enable or disable sparse point cloud calculation.  Parameter ``enable``:
    method
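    Example (a minimal sketch, assuming the PointCloud node exposes a PointCloudConfig as its initialConfig):

        pcl = pipeline.create(dai.node.PointCloud)
        pcl.initialConfig.setSparse(True)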
    class

    depthai.PointCloudData(depthai.Buffer)

    method
    method
    getHeight(self) -> int: int
    Retrieves the height in pixels - in case of a sparse point cloud, this represents the height of the frame which was used to generate the point cloud
    method
    getInstanceNum(self) -> int: int
    Retrieves instance number
    method
    getMaxX(self) -> float: float
    Retrieves maximal x coordinate in depth units (millimeter by default)
    method
    getMaxY(self) -> float: float
    Retrieves maximal y coordinate in depth units (millimeter by default)
    method
    getMaxZ(self) -> float: float
    Retrieves maximal z coordinate in depth units (millimeter by default)
    method
    getMinX(self) -> float: float
    Retrieves minimal x coordinate in depth units (millimeter by default)
    method
    getMinY(self) -> float: float
    Retrieves minimal y coordinate in depth units (millimeter by default)
    method
    getMinZ(self) -> float: float
    Retrieves minimal z coordinate in depth units (millimeter by default)
    method
    method
    getSequenceNum(self) -> int: int
    Retrieves sequence number
    method
    getTimestamp(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp related to dai::Clock::now()
    method
    getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
    method
    getWidth(self) -> int: int
    Retrieves the width in pixels - in case of a sparse point cloud, this represents the width of the frame which was used to generate the point cloud
    method
    isSparse(self) -> bool: bool
    Retrieves whether point cloud is sparse
    method
    setHeight(self, arg0: typing.SupportsInt) -> PointCloudData: PointCloudData
    Specifies frame height  Parameter ``height``:     frame height
    method
    setInstanceNum(self, arg0: typing.SupportsInt) -> PointCloudData: PointCloudData
    Instance number relates to the origin of the frame (which camera)  Parameter ``instance``:     Instance number
    method
    setMaxX(self, arg0: typing.SupportsFloat) -> PointCloudData: PointCloudData
    Specifies maximal x coordinate in depth units (millimeter by default)  Parameter ``val``:     maximal x coordinate in depth units (millimeter by default)
    method
    setMaxY(self, arg0: typing.SupportsFloat) -> PointCloudData: PointCloudData
    Specifies maximal y coordinate in depth units (millimeter by default)  Parameter ``val``:     maximal y coordinate in depth units (millimeter by default)
    method
    setMaxZ(self, arg0: typing.SupportsFloat) -> PointCloudData: PointCloudData
    Specifies maximal z coordinate in depth units (millimeter by default)  Parameter ``val``:     maximal z coordinate in depth units (millimeter by default)
    method
    setMinX(self, arg0: typing.SupportsFloat) -> PointCloudData: PointCloudData
    Specifies minimal x coordinate in depth units (millimeter by default)  Parameter ``val``:     minimal x coordinate in depth units (millimeter by default)
    method
    setMinY(self, arg0: typing.SupportsFloat) -> PointCloudData: PointCloudData
    Specifies minimal y coordinate in depth units (millimeter by default)  Parameter ``val``:     minimal y coordinate in depth units (millimeter by default)
    method
    setMinZ(self, arg0: typing.SupportsFloat) -> PointCloudData: PointCloudData
    Specifies minimal z coordinate in depth units (millimeter by default)  Parameter ``val``:     minimal z coordinate in depth units (millimeter by default)
    method
    setSequenceNum(self, arg0: typing.SupportsInt) -> PointCloudData: PointCloudData
    Specifies sequence number  Parameter ``seq``:     Sequence number
    method
    method
    setTimestamp(self, arg0: datetime.timedelta) -> PointCloudData: PointCloudData
    Sets timestamp related to dai::Clock::now()
    method
    method
    setWidth(self, arg0: typing.SupportsInt) -> PointCloudData: PointCloudData
    Specifies frame width  Parameter ``width``:     frame width
    property
    method
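    Example (a minimal sketch of reading points on the host, following the v2 point-cloud example; the "pcl" stream name is an assumption):

        pclData = device.getOutputQueue("pcl").get()  # dai.PointCloudData
        points = pclData.getPoints()                  # Nx3 array of (x, y, z), mm by default
        print(pclData.getWidth(), pclData.getHeight(), pclData.isSparse())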
    class

    depthai.RawAprilTagConfig(depthai.RawBuffer)

    class
    Family
    Supported AprilTag families.  Members:    TAG_36H11    TAG_36H10    TAG_25H9    TAG_16H5    TAG_CIR21H7    TAG_STAND41H12
    class
    QuadThresholds
    AprilTag quad threshold parameters.
    method
    property
    decodeSharpening
    How much sharpening should be done to decoded images. This can help decode small tags but may or may not help in odd or low lighting conditions. The default value is 0.25.
    method
    property
    family
    AprilTag family.
    method
    property
    maxHammingDistance
    Max number of error bits that should be corrected. Accepting large numbers of corrected errors leads to greatly increased false positive rates. As of this implementation, the detector cannot detect tags with a hamming distance greater than 2.
    method
    property
    quadDecimate
    Detection of quads can be done on a lower-resolution image, improving speed at a cost of pose accuracy and a slight decrease in detection rate. Decoding the binary payload is still done at full resolution.
    method
    property
    quadSigma
    What Gaussian blur should be applied to the segmented image. Parameter is the standard deviation in pixels. Very noisy images benefit from non-zero values (e.g. 0.8).
    method
    property
    quadThresholds
    AprilTag quad threshold parameters.
    method
    property
    refineEdges
    When non-zero, the edges of each quad are adjusted to "snap to" strong gradients nearby. This is useful when decimation is employed, as it can increase the quality of the initial quad estimate substantially. Generally recommended to be on. Very computationally inexpensive. Option is ignored if quadDecimate = 1.
    method
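    Example (a minimal sketch of setting the tag family through the user-facing AprilTagConfig, which wraps this raw structure):

        aprilTag = pipeline.create(dai.node.AprilTag)
        aprilTag.initialConfig.setFamily(dai.AprilTagConfig.Family.TAG_36H11)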
    class

    depthai.RawAprilTagConfig.QuadThresholds

    method
    property
    criticalDegree
    Reject quads where pairs of edges have angles that are close to straight or close to 180 degrees. Zero means that no quads are rejected. (In degrees).
    method
    property
    deglitch
    Should the thresholded image be deglitched? Only useful for very noisy images
    method
    property
    maxLineFitMse
    When fitting lines to the contours, what is the maximum mean squared error allowed? This is useful in rejecting contours that are far from being quad shaped; rejecting these quads "early" saves expensive decoding processing.
    method
    property
    maxNmaxima
    How many corner candidates to consider when segmenting a group of pixels into a quad.
    method
    property
    minClusterPixels
    Reject quads containing too few pixels.
    method
    property
    minWhiteBlackDiff
    When we build our model of black & white pixels, we add an extra check that the white model must be (overall) brighter than the black model. How much brighter? (in pixel values: [0,255]).
    method
    class

    depthai.RawCameraControl(depthai.RawBuffer)

    class
    AntiBandingMode
    Members:    OFF    MAINS_50_HZ    MAINS_60_HZ    AUTO
    class
    AutoFocusMode
    Members:    OFF    AUTO    MACRO    CONTINUOUS_VIDEO    CONTINUOUS_PICTURE    EDOF
    class
    AutoWhiteBalanceMode
    Members:    OFF    AUTO    INCANDESCENT    FLUORESCENT    WARM_FLUORESCENT    DAYLIGHT    CLOUDY_DAYLIGHT    TWILIGHT    SHADE
    class
    CaptureIntent
    Members:    CUSTOM    PREVIEW    STILL_CAPTURE    VIDEO_RECORD    VIDEO_SNAPSHOT    ZERO_SHUTTER_LAG
    class
    Command
    Members:    START_STREAM    STOP_STREAM    STILL_CAPTURE    MOVE_LENS    AF_TRIGGER    AE_MANUAL    AE_AUTO    AWB_MODE    SCENE_MODE    ANTIBANDING_MODE    EXPOSURE_COMPENSATION    AE_LOCK    AE_TARGET_FPS_RANGE    AWB_LOCK    CAPTURE_INTENT    CONTROL_MODE    FRAME_DURATION    SENSITIVITY    EFFECT_MODE    AF_MODE    NOISE_REDUCTION_STRENGTH    SATURATION    BRIGHTNESS    STREAM_FORMAT    RESOLUTION    SHARPNESS    CUSTOM_USECASE    CUSTOM_CAPT_MODE    CUSTOM_EXP_BRACKETS    CUSTOM_CAPTURE    CONTRAST    AE_REGION    AF_REGION    LUMA_DENOISE    CHROMA_DENOISE    WB_COLOR_TEMP
    class
    ControlMode
    Members:    OFF    AUTO    USE_SCENE_MODE
    class
    EffectMode
    Members:    OFF    MONO    NEGATIVE    SOLARIZE    SEPIA    POSTERIZE    WHITEBOARD    BLACKBOARD    AQUA
    class
    FrameSyncMode
    Members:    OFF    OUTPUT    INPUT
    class
    SceneMode
    Members:    UNSUPPORTED    FACE_PRIORITY    ACTION    PORTRAIT    LANDSCAPE    NIGHT    NIGHT_PORTRAIT    THEATRE    BEACH    SNOW    SUNSET    STEADYPHOTO    FIREWORKS    SPORTS    PARTY    CANDLELIGHT    BARCODE
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    method
    method
    method
    method
    property
    method
    property
    method
    property
    method
    property
    method
    property
    method
    property
    method
    property
    method
    property
    method
    property
    method
    property
    method
    property
    method
    property
    method
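    Example (a minimal sketch of the user-facing CameraControl message that issues these commands; the exposure/ISO values are illustrative and controlQueue is assumed to come from an XLinkIn 'control' stream):

        ctrl = dai.CameraControl()
        ctrl.setManualExposure(20000, 400)  # exposure time in us, ISO sensitivity
        ctrl.setAutoWhiteBalanceMode(dai.CameraControl.AutoWhiteBalanceMode.AUTO)
        controlQueue.send(ctrl)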
    class

    depthai.RawCameraControl.Command

    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    property
    property
    class

    depthai.RawCameraControl.SceneMode

    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    property
    property
    class

    depthai.RawEdgeDetectorConfig(depthai.RawBuffer)

    class

    depthai.RawFeatureTrackerConfig(depthai.RawBuffer)

    class
    CornerDetector
    Corner detector configuration structure.
    class
    FeatureMaintainer
    FeatureMaintainer configuration structure.
    class
    MotionEstimator
    Used for feature reidentification between current and previous features.
    method
    property
    cornerDetector
    Corner detector configuration. Used for feature detection.
    method
    property
    featureMaintainer
    FeatureMaintainer configuration. Used for feature maintaining.
    method
    property
    motionEstimator
    Motion estimator configuration. Used for feature reidentification between current and previous features.
    method
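    Example (a minimal sketch of the get/modify/set pattern on a FeatureTracker node's initial configuration):

        featureTracker = pipeline.create(dai.node.FeatureTracker)
        cfg = featureTracker.initialConfig.get()  # RawFeatureTrackerConfig
        cfg.motionEstimator.enable = True
        cfg.cornerDetector.numTargetFeatures = 256
        featureTracker.initialConfig.set(cfg)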
    class

    depthai.RawFeatureTrackerConfig.CornerDetector

    class
    Thresholds
    Threshold settings structure for corner detector.
    class
    Type
    Members:    HARRIS    SHI_THOMASI
    method
    property
    cellGridDimension
    Ensures distributed feature detection across the image. Image is divided into horizontal and vertical cells, each cell has a target feature count = numTargetFeatures / cellGridDimension. Each cell has its own feature threshold. A value of 4 means that the image is divided into 4x4 cells of equal width/height. Maximum 4, minimum 1.
    method
    property
    enableSobel
    Enable 3x3 Sobel operator to smoothen the image whose gradient is to be computed. If disabled, a simple 1D row/column differentiator is used for gradient.
    method
    property
    enableSorting
    Enable sorting detected features based on their score or not.
    method
    property
    numMaxFeatures
    Hard limit for the maximum number of features that can be detected. 0 means auto, will be set to the maximum value based on memory constraints.
    method
    property
    numTargetFeatures
    Target number of features to detect. Maximum number of features is determined at runtime based on algorithm type.
    method
    property
    thresholds
    Threshold settings. These are advanced settings, suitable for debugging/special cases.
    method
    property
    type
    Corner detector algorithm type.
    method
    class

    depthai.RawFeatureTrackerConfig.CornerDetector.Thresholds

    method
    property
    decreaseFactor
    When the detected number of features exceeds the maximum in a cell, the threshold is lowered by multiplying its value by this factor.
    method
    property
    increaseFactor
    When the detected number of features doesn't exceed the maximum in a cell, the threshold is increased by multiplying its value by this factor.
    method
    property
    initialValue
    Minimum strength of a feature which will be detected. 0 means automatic threshold update. Recommended so the tracker can adapt to different scenes/textures. Each cell has its own threshold. Empirical value.
    method
    property
    max
    Maximum limit for threshold. Applicable when automatic threshold update is enabled. 0 means auto. Empirical value.
    method
    property
    min
    Minimum limit for threshold. Applicable when automatic threshold update is enabled. 0 means auto, 6000000 for HARRIS, 1200 for SHI_THOMASI. Empirical value.
    method
    class

    depthai.RawFeatureTrackerConfig.FeatureMaintainer

    method
    property
    enable
    Enable feature maintaining or not.
    method
    property
    lostFeatureErrorThreshold
    Optical flow measures the tracking error for every feature. If the point can’t be tracked or it’s out of the image it will set this error to a maximum value. This threshold defines the level where the tracking accuracy is considered too bad to keep the point.
    method
    property
    minimumDistanceBetweenFeatures
    Used to filter out detected feature points that are too close. Requires sorting enabled in detector. Unit of measurement is squared euclidean distance in pixels.
    method
    property
    trackedFeatureThreshold
    Once a feature was detected and we started tracking it, we need to update its Harris score on each image. This is needed because a feature point can disappear, or it can become too weak to be tracked. This threshold defines the point where such a feature must be dropped. As the goal of the algorithm is to provide longer tracks, we try to add strong points and track them until they are absolutely untrackable. This is why this value is usually smaller than the detection threshold.
    method
    class

    depthai.RawFeatureTrackerConfig.MotionEstimator

    class
    OpticalFlow
    Optical flow configuration structure.
    class
    Type
    Members:    LUCAS_KANADE_OPTICAL_FLOW    HW_MOTION_ESTIMATION
    method
    property
    enable
    Enable motion estimation or not.
    method
    property
    opticalFlow
    Optical flow configuration. Takes effect only if MotionEstimator algorithm type set to LUCAS_KANADE_OPTICAL_FLOW.
    method
    property
    type
    Motion estimator algorithm type.
    method
    class

    depthai.RawFeatureTrackerConfig.MotionEstimator.OpticalFlow

    method
    property
    epsilon
    Feature tracking termination criteria. Optical flow will refine the feature position on each pyramid level until the displacement between two refinements is smaller than this value. Decreasing this number increases runtime.
    method
    property
    maxIterations
    Feature tracking termination criteria. Optical flow will refine the feature position at most this many times on each pyramid level. If the epsilon criterion described above is not met after this number of iterations, the algorithm will continue with the currently calculated value. Increasing this number increases runtime.
    method
    property
    pyramidLevels
    Number of pyramid levels, only for optical flow. AUTO means it's decided based on input resolution: 3 if image width <= 640, else 4. Valid values are either 3/4 for VGA, 4 for 720p and above.
    method
    property
    searchWindowHeight
    Image patch height used to track features. Must be an odd number, maximum 9. N means the algorithm will be able to track motion at most (N-1)/2 pixels in a direction per pyramid level. Increasing this number increases runtime
    method
    property
    searchWindowWidth
    Image patch width used to track features. Must be an odd number, maximum 9. N means the algorithm will be able to track motion at most (N-1)/2 pixels in a direction per pyramid level. Increasing this number increases runtime
    method
    class

    depthai.RawIMUData(depthai.RawBuffer)

    class

    depthai.RawImageAlignConfig(depthai.RawBuffer)

    method
    property
    staticDepthPlane
    Optional static depth plane to align to, in depth units, by default millimeters
    method
    class

    depthai.RawImageManipConfig.FormatConfig

    class

    depthai.RawImgFrame(depthai.RawBuffer)

    class
    class
    Type
    Members:    YUV422i    YUV444p    YUV420p    YUV422p    YUV400p    RGBA8888    RGB161616    RGB888p    BGR888p    RGB888i    BGR888i    RGBF16F16F16p    BGRF16F16F16p    RGBF16F16F16i    BGRF16F16F16i    GRAY8    GRAYF16    LUT2    LUT4    LUT16    RAW16    RAW14    RAW12    RAW10    RAW8    PACK10    PACK12    YUV444i    NV12    NV21    BITSTREAM    HDR    NONE
    variable
    method
    property
    method
    property
    method
    property
    method
    property
    method
    property
    method
    class

    depthai.RawImgFrame.Type

    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    property
    property
    class

    depthai.RawMessageGroup(depthai.RawBuffer)

    class

    depthai.RawObjectTrackerConfig(depthai.RawBuffer)

    method
    property
    trackletIdsToRemove
    Tracklet IDs to remove from tracking. Tracklet will transition to REMOVED state.
    method
    class

    depthai.RawStereoDepthConfig(depthai.RawBuffer)

    class
    class
    CensusTransform
    The basic cost function used by the Stereo Accelerator for matching the left and right images is the Census Transform. It works on a block of pixels and computes a bit vector which represents the structure of the image in that block. There are two types of Census Transform based on how the middle pixel is used: Classic Approach and Modified Census. The comparisons that are made between pixels may or may not be thresholded. In some cases a mask can be applied to filter out only specific bits from the entire bit stream. All these approaches are: Classic Approach: uses the middle pixel to compare against all its neighbors over a defined window. Each comparison results in a new bit, that is 0 if the central pixel is smaller, or 1 if it is bigger than its neighbor. Modified Census Transform: same as the classic Census Transform, but instead of comparing the central pixel with its neighbors, the window mean is compared with each pixel over the window. Thresholding Census Transform: same as the classic Census Transform, but it is not enough for a neighbor pixel to be bigger than the central pixel, it must be significantly bigger (based on a threshold). Census Transform with Mask: same as the classic Census Transform, but in this case not all of the pixels from the support window are part of the binary descriptor. We use a mask “M” to define which pixels are part of the binary descriptor (1), and which pixels should be skipped (0).
    class
    CostAggregation
    Cost Aggregation is based on Semi Global Block Matching (SGBM). This algorithm uses a semi global technique to aggregate the cost map. Ultimately the idea is to build inertia into the stereo algorithm. If a pixel has very little texture information, then odds are the correct disparity for this pixel is close to that of the previous pixel considered. This means that we get improved results in areas with low texture.
    class
    CostMatching
    The matching cost is a way of measuring the similarity of image locations in a stereo correspondence algorithm. Based on the configuration parameters and the descriptor type, a linear equation is applied to compute the cost for each candidate disparity at each pixel.
    class
    MedianFilter
    Median filter config  Members:    MEDIAN_OFF    KERNEL_3x3    KERNEL_5x5    KERNEL_7x7
    class
    PostProcessing
    Post-processing filters, all the filters are applied in disparity domain.
    method
    property
    algorithmControl
    Controls the flow of stereo algorithm - left-right check, subpixel etc.
    method
    property
    censusTransform
    Census transform settings.
    method
    property
    costAggregation
    Cost aggregation settings.
    method
    property
    costMatching
    Cost matching settings.
    method
    property
    postProcessing
    Controls the postprocessing of disparity and/or depth map.
    method
    class

    depthai.RawStereoDepthConfig.AlgorithmControl

    class
    DepthAlign
    Align the disparity/depth to the perspective of a rectified output, or center it  Members:    RECTIFIED_RIGHT :     RECTIFIED_LEFT :     CENTER : 
    class
    DepthUnit
    Measurement unit for depth data  Members:    METER :     CENTIMETER :     MILLIMETER :     INCH :     FOOT :     CUSTOM : 
    method
    property
    centerAlignmentShiftFactor
    Used only for debug purposes. centerAlignmentShiftFactor is set automatically in firmware, from camera extrinsics when depth alignment to camera is enabled. Center alignment is achieved by shifting the obtained disparity map by a scale factor. It's used to align to a different camera that is on the same horizontal baseline as the two stereo cameras. E.g. if we have a device with a 10 cm stereo baseline, and we have another camera in between, 9 cm from the LEFT camera and 1 cm from the RIGHT camera, we can align the obtained disparity map using a scale factor of 0.9. Note that aligning the disparity map to a different camera involves 2 steps: 1. Shifting the obtained disparity map. 2. Warping the image to counter rotate and scaling to match the FOV. Center alignment factor 1 is equivalent to RECTIFIED_RIGHT; center alignment factor 0 is equivalent to RECTIFIED_LEFT.
    method
    property
    customDepthUnitMultiplier
    Custom depth unit multiplier, if custom depth unit is enabled, relative to 1 meter. A multiplier of 1000 effectively means depth unit in millimeter.
    method
    property
    depthAlign
    Set the disparity/depth alignment to the perspective of a rectified output, or center it
    method
    property
    depthUnit
    Measurement unit for depth data. Depth data is integer value, multiple of depth unit.
    method
    property
    disparityShift
    Shift input frame by a number of pixels to increase minimum depth. For example shifting by 48 will change effective disparity search range from (0,95] to [48,143]. An alternative approach to reducing the minZ. We normally only recommend doing this when it is known that there will be no objects farther away than MaxZ, such as having a depth camera mounted above a table pointing down at the table surface.
    method
    property
    enableExtended
    Disparity range increased from 95 to 190, combined from full resolution and downscaled images. Suitable for short range objects
    method
    property
    enableLeftRightCheck
    Computes disparities in both L-R and R-L directions, and combines them, for better occlusion handling
    method
    property
    enableSubpixel
    Computes disparity with sub-pixel interpolation (5 fractional bits), suitable for long range
    method
    property
    leftRightCheckThreshold
    Left-right check threshold for left-right, right-left disparity map combine, 0..128 Used only when left-right check mode is enabled. Defines the maximum difference between the confidence of pixels from left-right and right-left confidence maps
    method
    property
    numInvalidateEdgePixels
    Invalidate X pixels at the edge of the disparity frame. For right and center alignment X pixels will be invalidated from the right edge, for left alignment from the left edge.
    method
    property
    subpixelFractionalBits
    Number of fractional bits for subpixel mode  Valid values: 3,4,5  Defines the number of fractional disparities: 2^x  Median filter postprocessing is supported only for 3 fractional bits
    method
    class

    depthai.RawStereoDepthConfig.CensusTransform

    class
    KernelSize
    Census transform kernel size possible values.  Members:    AUTO :     KERNEL_5x5 :     KERNEL_7x7 :     KERNEL_7x9 : 
    method
    property
    enableMeanMode
    If enabled, each pixel in the window is compared with the mean window value instead of the central pixel.
    method
    property
    kernelMask
    Census transform mask, default - auto, mask is set based on resolution and kernel size. Disabled for 400p input resolution. Enabled for 720p. 0XA82415 for 5x5 census transform kernel. 0XAA02A8154055 for 7x7 census transform kernel. 0X2AA00AA805540155 for 7x9 census transform kernel. Empirical values.
    method
    property
    kernelSize
    Census transform kernel size.
    method
    property
    threshold
    Census transform comparison threshold value.
    method
    class

    depthai.RawStereoDepthConfig.CostAggregation

    class

    depthai.RawStereoDepthConfig.CostMatching

    class
    DisparityWidth
    Disparity search range: 64 or 96 pixels are supported by the HW.  Members:    DISPARITY_64 :     DISPARITY_96 : 
    class
    LinearEquationParameters
    The linear equation applied for computing the cost is: COMB_COST = α*AD + β*(CTC<<3). CLAMP(COMB_COST >> 5, threshold). Where AD is the Absolute Difference between 2 pixels values. CTC is the Census Transform Cost between 2 pixels, based on Hamming distance (xor). The α and β parameters are subject to fine tuning by the user.
    method
    property
    confidenceThreshold
    Disparities with confidence value under this threshold are accepted. Higher confidence threshold means disparities with less confidence are accepted too.
    method
    property
    disparityWidth
    Disparity search range, default 96 pixels.
    method
    property
    enableCompanding
    Disparity companding using sparse matching. Matching pixel by pixel for N disparities. Matching every 2nd pixel for M disparities. Matching every 4th pixel for T disparities. In case of 96 disparities: N=48, M=32, T=16. This way the search range is extended to 176 disparities, by sparse matching. Note: when enabling this flag only the depth map will be affected, the disparity map is not.
    method
    property
    invalidDisparityValue
    Used only for debug purposes; SW postprocessing handles only the invalid value of 0 properly.
    method
    property
    linearEquationParameters
    Cost calculation linear equation parameters.
    method
    class

    depthai.RawStereoDepthConfig.CostMatching.LinearEquationParameters

    class

    depthai.RawStereoDepthConfig.PostProcessing

    class
    BrightnessFilter
    Brightness filtering. If input frame pixel is too dark or too bright, disparity will be invalidated. The idea is that for too dark/too bright pixels we have low confidence, since that area was under/over exposed and details were lost.
    class
    DecimationFilter
    Decimation filter. Reduces the depth scene complexity. The filter runs on kernel sizes [2x2] to [8x8] pixels.
    class
    Filter
    Members:    DECIMATION :     SPECKLE :     MEDIAN :     TEMPORAL :     SPATIAL :     FILTER_COUNT : 
    class
    SpatialFilter
    1D edge-preserving spatial filter using high-order domain transform.
    class
    SpeckleFilter
    Speckle filtering. Removes speckle noise.
    class
    TemporalFilter
    Temporal filtering with optional persistence.
    class
    ThresholdFilter
    Threshold filtering. Filters out distances outside of a given interval.
    method
    property
    bilateralSigmaValue
    Sigma value for bilateral filter. 0 means disabled. A larger value of the parameter means that farther colors within the pixel neighborhood will be mixed together.
    method
    property
    brightnessFilter
    Brightness filtering. If input frame pixel is too dark or too bright, disparity will be invalidated. The idea is that for too dark/too bright pixels we have low confidence, since that area was under/over exposed and details were lost.
    method
    property
    decimationFilter
    Decimation filter. Reduces disparity/depth map x/y complexity, reducing runtime complexity for other filters.
    method
    property
    filteringOrder
    Order of filters to be applied if filtering is enabled.
    method
    property
    median
    Set kernel size for disparity/depth median filtering, or disable
    method
    property
    spatialFilter
    Edge-preserving filtering: This type of filter will smooth the depth noise while attempting to preserve edges.
    method
    property
    speckleFilter
    Speckle filtering. Removes speckle noise.
    method
    property
    temporalFilter
    Temporal filtering with optional persistence.
    method
    property
    thresholdFilter
    Threshold filtering. Filters out distances outside of a given interval.
    method
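    Example (a minimal sketch of enabling a few of these filters through a StereoDepth node, using the get/modify/set pattern; the values are illustrative):

        stereo = pipeline.create(dai.node.StereoDepth)
        cfg = stereo.initialConfig.get()  # RawStereoDepthConfig
        cfg.postProcessing.speckleFilter.enable = True
        cfg.postProcessing.speckleFilter.speckleRange = 50
        cfg.postProcessing.spatialFilter.enable = True
        cfg.postProcessing.thresholdFilter.minRange = 400    # depth units, mm by default
        cfg.postProcessing.thresholdFilter.maxRange = 15000
        stereo.initialConfig.set(cfg)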
    class

    depthai.RawStereoDepthConfig.PostProcessing.BrightnessFilter

    method
    property
    maxBrightness
    Maximum pixel brightness. If an input pixel is greater than or equal to this value, the depth value is invalidated.
    method
    property
    minBrightness
    Minimum pixel brightness. If an input pixel is less than or equal to this value, the depth value is invalidated.
    method
    class

    depthai.RawStereoDepthConfig.PostProcessing.SpatialFilter

    method
    property
    alpha
    The alpha factor in an exponential moving average: Alpha=1 means no filtering, Alpha=0 an infinite filter. Determines the amount of smoothing.
    method
    property
    delta
    Step-size boundary. Establishes the threshold used to preserve "edges". If the disparity value between neighboring pixels exceeds the disparity threshold set by this delta parameter, then filtering will be temporarily disabled. Default value 0 means auto: 3 disparity integer levels. In case of subpixel mode it's 3*number of subpixel levels.
    method
    property
    enable
    Whether to enable or disable the filter.
    method
    property
    holeFillingRadius
    An in-place heuristic symmetric hole-filling mode applied horizontally during the filter passes. Intended to rectify minor artefacts with minimal performance impact. Search radius for hole filling.
    method
    property
    numIterations
    Number of iterations over the image in both horizontal and vertical direction.
    method
    class

    depthai.RawStereoDepthConfig.PostProcessing.SpeckleFilter

    method
    property
    differenceThreshold
    Maximum difference between neighbor disparity pixels to put them into the same blob. Units in disparity integer levels.
    method
    property
    enable
    Whether to enable or disable the filter.
    method
    property
    speckleRange
    Speckle search range.
    method
    class

    depthai.RawStereoDepthConfig.PostProcessing.TemporalFilter

    class
    PersistencyMode
    Persistency algorithm type.  Members:    PERSISTENCY_OFF :     VALID_8_OUT_OF_8 :     VALID_2_IN_LAST_3 :     VALID_2_IN_LAST_4 :     VALID_2_OUT_OF_8 :     VALID_1_IN_LAST_2 :     VALID_1_IN_LAST_5 :     VALID_1_IN_LAST_8 :     PERSISTENCY_INDEFINITELY : 
    method
    property
    alpha
    The alpha factor in an exponential moving average: Alpha=1 means no filtering, Alpha=0 an infinite filter. Determines the extent of the temporal history that should be averaged.
    method
    property
    delta
    Step-size boundary. Establishes the threshold used to preserve surfaces (edges). If the disparity value between neighboring pixels exceeds the disparity threshold set by this delta parameter, then filtering will be temporarily disabled. Default value 0 means auto: 3 disparity integer levels. In case of subpixel mode it's 3*number of subpixel levels.
    method
    property
    enable
    Whether to enable or disable the filter.
    method
    property
    persistencyMode
    Persistency mode. If the current disparity/depth value is invalid, it will be replaced by an older value, based on persistency mode.
    method
    class

    depthai.RawStereoDepthConfig.PostProcessing.TemporalFilter.PersistencyMode

    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    variable
    method
    method
    method
    method
    method
    method
    method
    method
    method
    method
    property
    property
    class

    depthai.RawStereoDepthConfig.PostProcessing.ThresholdFilter

    method
    property
    maxRange
    Maximum range in depth units. Depth values over this value are invalidated.
    method
    property
    minRange
    Minimum range in depth units. Depth values under this value are invalidated.
    method
    class

    depthai.RawTracklets(depthai.RawBuffer)

    class

    depthai.Rect

    method
    method
    area(self) -> float: float
    Area (width*height) of the rectangle
    method
    method
    contains(self, arg0: Point2f) -> bool: bool
    Checks whether the rectangle contains the point.
    method
    denormalize(self, width: typing.SupportsInt, height: typing.SupportsInt) -> Rect: Rect
    Denormalize rectangle.  Parameter ``destWidth``:     Destination frame width.  Parameter ``destHeight``:     Destination frame height.
    method
    empty(self) -> bool: bool
    True if rectangle is empty.
    method
    isNormalized(self) -> bool: bool
    Whether rectangle is normalized (coordinates in [0,1] range) or not.
    method
    normalize(self, width: typing.SupportsInt, height: typing.SupportsInt) -> Rect: Rect
    Normalize rectangle.  Parameter ``srcWidth``:     Source frame width.  Parameter ``srcHeight``:     Source frame height.
    method
    size(self) -> Size2f: Size2f
    Size (width, height) of the rectangle
    method
    property
    method
    property
    method
    property
    method
    property
    method
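    Example (a minimal sketch of normalized vs. pixel coordinates):

        r = dai.Rect(dai.Point2f(0.25, 0.25), dai.Point2f(0.75, 0.75))
        print(r.isNormalized())           # True: coordinates are in [0,1]
        px = r.denormalize(640, 400)      # pixel coordinates for a 640x400 frame
        print(px.area(), px.contains(dai.Point2f(320, 200)))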
    class

    depthai.SPIOutProperties

    class

    depthai.ScriptProperties

    property
    processor
    Which processor should execute the script
    method
    property
    scriptName
    Name of script
    method
    property
    scriptUri
    Uri which points to actual script
    method
    class

    depthai.SpatialDetectionNetworkProperties(depthai.DetectionNetworkProperties)

    class

    depthai.SpatialImgDetection(depthai.ImgDetection)

    class

    depthai.SpatialImgDetections(depthai.Buffer)

    method
    method
    getSequenceNum(self) -> int: int
    Retrieves sequence number
    method
    getTimestamp(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp related to dai::Clock::now()
    method
    getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
    method
    method
    method
    property
    method
    class

    depthai.SpatialLocationCalculatorConfig(depthai.Buffer)

    method
    method
    addROI(self, ROI: SpatialLocationCalculatorConfigData)
    Add a new ROI to configuration data.  Parameter ``roi``:     Configuration parameters for ROI (region of interest)
    method
    get(self) -> ...: ...
    Retrieve configuration data for SpatialLocationCalculator.  Returns:     config for SpatialLocationCalculator
    method
    getConfigData(self) -> list[SpatialLocationCalculatorConfigData]: list[SpatialLocationCalculatorConfigData]
    Retrieve configuration data for SpatialLocationCalculator  Returns:     Vector of configuration parameters for ROIs (region of interests)
    method
    set(self, config: ...) -> SpatialLocationCalculatorConfig: SpatialLocationCalculatorConfig
    Set explicit configuration.  Parameter ``config``:     Explicit configuration
    method
    setROIs(self, ROIs: collections.abc.Sequence [ SpatialLocationCalculatorConfigData ])
    Set a vector of ROIs as configuration data.  Parameter ``ROIs``:     Vector of configuration parameters for ROIs (region of interests)
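    Example (a minimal sketch of configuring one ROI; the threshold values, in depth units (mm by default), are illustrative):

        roiData = dai.SpatialLocationCalculatorConfigData()
        roiData.depthThresholds.lowerThreshold = 100
        roiData.depthThresholds.upperThreshold = 10000
        roiData.roi = dai.Rect(dai.Point2f(0.4, 0.4), dai.Point2f(0.6, 0.6))
        cfg = dai.SpatialLocationCalculatorConfig()
        cfg.addROI(roiData)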
    class

    depthai.SpatialLocationCalculatorConfigData

    method
    property
    calculationAlgorithm
    Calculation method used to obtain spatial locations. Average/mean: the average of ROI is used for calculation. Min: the minimum value inside ROI is used for calculation. Max: the maximum value inside ROI is used for calculation. Mode: the most frequent value inside ROI is used for calculation. Median: the median value inside ROI is used for calculation. Default: median.
    method
    property
    depthThresholds
    Upper and lower thresholds for depth values to take into consideration.
    method
    property
    roi
    Region of interest for spatial location calculation.
    method
    class

    depthai.SpatialLocationCalculatorData(depthai.Buffer)

    method
    method
    getSequenceNum(self) -> int: int
    Retrieves sequence number
    method
    getSpatialLocations(self) -> list[SpatialLocations]: list[SpatialLocations]
    Retrieve spatial location results.  Returns:     Vector of spatial location data, carrying spatial information (X,Y,Z)
    method
    getTimestamp(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp related to dai::Clock::now()
    method
    getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
    method
    method
    method
    property
    method
    class

    depthai.SpatialLocationCalculatorProperties

    class

    depthai.SpatialLocations

    method
    property
    config
    Configuration for selected ROI
    method
    property
    depthAverage
    Average of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to AVERAGE, MIN or MAX.
    method
    property
    depthAveragePixelCount
    Number of depth values used in calculations.
    method
    property
    depthMax
    Maximum of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to AVERAGE, MIN or MAX.
    method
    property
    depthMedian
    Median of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to MEDIAN.
    method
    property
    depthMin
    Minimum of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to AVERAGE, MIN or MAX.
    method
    property
    depthMode
    Most frequent of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to MODE.
    method
    property
    spatialCoordinates
    Spatial coordinates - x,y,z; x,y are the relative positions of the center of ROI to the center of depth map
    method
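    Example (a minimal sketch of reading results on the host; the "spatialData" stream name is an assumption):

        for loc in device.getOutputQueue("spatialData").get().getSpatialLocations():
            c = loc.spatialCoordinates
            print(loc.config.roi, c.x, c.y, c.z)  # coordinates in depth units, mm by default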
    class

    depthai.StereoDepthConfig(depthai.Buffer)

    class
    class
    CensusTransform
    The basic cost function used by the Stereo Accelerator for matching the left and right images is the Census Transform. It works on a block of pixels and computes a bit vector which represents the structure of the image in that block. There are two types of Census Transform based on how the middle pixel is used: Classic Approach and Modified Census. The comparisons that are made between pixels may or may not be thresholded. In some cases a mask can be applied to filter out only specific bits from the entire bit stream. All these approaches are: Classic Approach: uses the middle pixel to compare against all its neighbors over a defined window. Each comparison results in a new bit, that is 0 if the central pixel is smaller, or 1 if it is bigger than its neighbor. Modified Census Transform: same as the classic Census Transform, but instead of comparing the central pixel with its neighbors, the window mean is compared with each pixel over the window. Thresholding Census Transform: same as the classic Census Transform, but it is not enough for a neighbor pixel to be bigger than the central pixel, it must be significantly bigger (based on a threshold). Census Transform with Mask: same as the classic Census Transform, but in this case not all of the pixels from the support window are part of the binary descriptor. We use a mask “M” to define which pixels are part of the binary descriptor (1), and which pixels should be skipped (0).
    class
    CostAggregation
    Cost Aggregation is based on Semi Global Block Matching (SGBM). This algorithm uses a semi global technique to aggregate the cost map. Ultimately the idea is to build inertia into the stereo algorithm. If a pixel has very little texture information, then odds are the correct disparity for this pixel is close to that of the previous pixel considered. This means that we get improved results in areas with low texture.
    class
    CostMatching
    The matching cost is a way of measuring the similarity of image locations in a stereo correspondence algorithm. Based on the configuration parameters and the descriptor type, a linear equation is applied to compute the cost for each candidate disparity at each pixel.
    class
    MedianFilter
    Median filter config  Members:    MEDIAN_OFF    KERNEL_3x3    KERNEL_5x5    KERNEL_7x7
    class
    PostProcessing
    Post-processing filters, all the filters are applied in disparity domain.
    method
    method
    get(self) -> RawStereoDepthConfig: RawStereoDepthConfig
    Retrieve configuration data for StereoDepth.  Returns:     config for stereo depth algorithm
    method
    getBilateralFilterSigma(self) -> int: int
    Get sigma value for 5x5 bilateral filter
    method
    getConfidenceThreshold(self) -> int: int
    Get confidence threshold for disparity calculation
    method
    method
    getLeftRightCheckThreshold(self) -> int: int
    Get threshold for left-right check combine
    method
    getMaxDisparity(self) -> float: float
    Useful for normalization of the disparity map.  Returns:     Maximum disparity value that the node can return
    method
    method
    set(self, config: RawStereoDepthConfig) -> StereoDepthConfig: StereoDepthConfig
    Set explicit configuration.  Parameter ``config``:     Explicit configuration
    method
    setBilateralFilterSigma(self, sigma: typing.SupportsInt) -> StereoDepthConfig: StereoDepthConfig
    A larger value of the parameter means that farther colors within the pixel neighborhood will be mixed together, resulting in larger areas of semi-equal color.  Parameter ``sigma``:     Set sigma value for 5x5 bilateral filter. 0..65535
    method
    setConfidenceThreshold(self, confThr: typing.SupportsInt) -> StereoDepthConfig: StereoDepthConfig
    Confidence threshold for disparity calculation  Parameter ``confThr``:     Confidence threshold value 0..255
    method
    setDepthAlign(self, align: RawStereoDepthConfig.AlgorithmControl.DepthAlign) -> StereoDepthConfig: StereoDepthConfig
    Parameter ``align``:     Set the disparity/depth alignment: centered (between the 'left' and 'right'     inputs), or from the perspective of a rectified output stream
    method
    setDepthUnit(self, arg0: RawStereoDepthConfig.AlgorithmControl.DepthUnit) -> StereoDepthConfig: StereoDepthConfig
    Set depth unit of depth map.  Meter, centimeter, millimeter, inch, foot or custom unit is available.
    method
    setDisparityShift(self, arg0: typing.SupportsInt) -> StereoDepthConfig: StereoDepthConfig
    Shift input frame by a number of pixels to increase minimum depth. For example shifting by 48 will change effective disparity search range from (0,95] to [48,143]. An alternative approach to reducing the minZ. We normally only recommend doing this when it is known that there will be no objects farther away than MaxZ, such as having a depth camera mounted above a table pointing down at the table surface.
    method
    setExtendedDisparity(self, enable: bool) -> StereoDepthConfig: StereoDepthConfig
    Disparity range increased from 95 to 190, combined from full resolution and downscaled images. Suitable for short range objects
    method
    setLeftRightCheck(self, enable: bool) -> StereoDepthConfig: StereoDepthConfig
    Computes disparities in both L-R and R-L directions and combines them, for better occlusion handling: invalid disparity values are discarded.
    method
    setLeftRightCheckThreshold(self, threshold: typing.SupportsInt) -> StereoDepthConfig: StereoDepthConfig
    Parameter ``threshold``:     Set threshold for left-right, right-left disparity map combine, 0..255
    method
    setMedianFilter(self, median: MedianFilter) -> StereoDepthConfig: StereoDepthConfig
    Parameter ``median``:     Set kernel size for disparity/depth median filtering, or disable
    method
    setNumInvalidateEdgePixels(self, arg0: typing.SupportsInt) -> StereoDepthConfig: StereoDepthConfig
    Invalidate X pixels at the edge of the disparity frame. For right and center alignment, X pixels are invalidated from the right edge; for left alignment, from the left edge.
    method
    setSubpixel(self, enable: bool) -> StereoDepthConfig: StereoDepthConfig
    Computes disparity with sub-pixel interpolation (3 fractional bits by default).  Suitable for long range. Currently incompatible with extended disparity
    method
    setSubpixelFractionalBits(self, subpixelFractionalBits: typing.SupportsInt) -> StereoDepthConfig: StereoDepthConfig
    Number of fractional bits for subpixel mode. Default value: 3. Valid values: 3,4,5. Defines the number of fractional disparities: 2^x. Median filter postprocessing is supported only for 3 fractional bits.
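    With the default 3 fractional bits the node outputs 2^3 = 8 sub-disparities per integer disparity, so the maximum value grows from 95 to 95 × 8 = 760; getMaxDisparity() accounts for this, which makes it the safe normalizer when colorizing the disparity frame. A sketch of the common host-side pattern (the queue name is hypothetical):

        import cv2
        import numpy as np

        max_disp = stereo.initialConfig.getMaxDisparity()  # e.g. 760.0 with 3-bit subpixel
        frame = disparity_queue.get().getFrame()           # raw disparity values
        frame = (frame * (255.0 / max_disp)).astype(np.uint8)
        cv2.imshow("disparity", cv2.applyColorMap(frame, cv2.COLORMAP_JET))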
    class

    depthai.StereoDepthConfig.AlgorithmControl

    class
    DepthAlign
    Align the disparity/depth to the perspective of a rectified output, or center it  Members:    RECTIFIED_RIGHT    RECTIFIED_LEFT    CENTER
    class
    DepthUnit
    Measurement unit for depth data  Members:    METER    CENTIMETER    MILLIMETER    INCH    FOOT    CUSTOM
    method
    property
    centerAlignmentShiftFactor
    Used only for debug purposes. centerAlignmentShiftFactor is set automatically in firmware, from camera extrinsics, when depth alignment to a camera is enabled. Center alignment is achieved by shifting the obtained disparity map by a scale factor; it's used to align to a different camera that is on the same horizontal baseline as the two stereo cameras. E.g. if we have a device with a 10 cm stereo baseline and another camera in between, 9 cm from the LEFT camera and 1 cm from the RIGHT camera, we can align the obtained disparity map using a scale factor of 0.9. Note that aligning the disparity map to a different camera involves 2 steps: 1. shifting the obtained disparity map; 2. warping the image to counter-rotate and scaling it to match the FOV. A center alignment factor of 1 is equivalent to RECTIFIED_RIGHT; a factor of 0 is equivalent to RECTIFIED_LEFT.
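    In other words, the factor is the target camera's distance from the LEFT camera divided by the stereo baseline; a sketch of that relationship (plain arithmetic, not an API call):

        baseline_cm = 10.0         # LEFT-to-RIGHT stereo baseline
        dist_from_left_cm = 9.0    # target camera: 9 cm from LEFT, 1 cm from RIGHT

        shift_factor = dist_from_left_cm / baseline_cm
        print(shift_factor)        # 0.9; 1.0 -> RECTIFIED_RIGHT, 0.0 -> RECTIFIED_LEFT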
    method
    property
    customDepthUnitMultiplier
    Custom depth unit multiplier, if custom depth unit is enabled, relative to 1 meter. A multiplier of 1000 effectively means depth unit in millimeter.
    method
    property
    depthAlign
    Set the disparity/depth alignment to the perspective of a rectified output, or center it
    method
    property
    depthUnit
    Measurement unit for depth data. Depth data is an integer value, a multiple of the depth unit.
    method
    property
    disparityShift
    Shift input frame by a number of pixels to increase minimum depth. For example, shifting by 48 will change the effective disparity search range from (0,95] to [48,143]. This is an alternative approach to reducing minZ. We normally only recommend doing this when it is known that there will be no objects farther away than maxZ, such as having a depth camera mounted above a table pointing down at the table surface.
    method
    property
    enableExtended
    Disparity range increased from 95 to 190, combined from full-resolution and downscaled images. Suitable for short-range objects.
    method
    property
    enableLeftRightCheck
    Computes disparities in both L-R and R-L directions and combines them, for better occlusion handling.
    method
    property
    enableSubpixel
    Computes disparity with sub-pixel interpolation (3 fractional bits by default), suitable for long range.
    method
    property
    leftRightCheckThreshold
    Left-right check threshold for left-right, right-left disparity map combine, 0..128. Used only when left-right check mode is enabled. Defines the maximum difference between the confidence of pixels from the left-right and right-left confidence maps.
    method
    property
    numInvalidateEdgePixels
    Invalidate X pixels at the edge of the disparity frame. For right and center alignment, X pixels are invalidated from the right edge; for left alignment, from the left edge.
    method
    property
    subpixelFractionalBits
    Number of fractional bits for subpixel mode. Valid values: 3,4,5. Defines the number of fractional disparities: 2^x. Median filter postprocessing is supported only for 3 fractional bits.
    method
    class

    depthai.StereoDepthConfig.CensusTransform

    class
    KernelSize
    Census transform kernel size possible values.  Members:    AUTO    KERNEL_5x5    KERNEL_7x7    KERNEL_7x9
    method
    property
    enableMeanMode
    If enabled, each pixel in the window is compared with the mean window value instead of the central pixel.
    method
    property
    kernelMask
    Census transform mask, default - auto, mask is set based on resolution and kernel size. Disabled for 400p input resolution. Enabled for 720p. 0XA82415 for 5x5 census transform kernel. 0XAA02A8154055 for 7x7 census transform kernel. 0X2AA00AA805540155 for 7x9 census transform kernel. Empirical values.
    method
    property
    kernelSize
    Census transform kernel size.
    method
    property
    threshold
    Census transform comparison threshold value.
    method
    class

    depthai.StereoDepthConfig.CostAggregation

    class

    depthai.StereoDepthConfig.CostMatching

    class
    DisparityWidth
    Disparity search range: 64 or 96 pixels are supported by the HW.  Members:    DISPARITY_64    DISPARITY_96
    class
    LinearEquationParameters
    The linear equation applied for computing the cost is: COMB_COST = α·AD + β·(CTC<<3), and the result is then clamped: CLAMP(COMB_COST >> 5, threshold). Here AD is the Absolute Difference between 2 pixel values, and CTC is the Census Transform Cost between 2 pixels, based on the Hamming distance (XOR). The α and β parameters are subject to fine-tuning by the user.
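    A toy re-statement of that equation for a single candidate pair (pure Python for illustration, not the firmware implementation; alpha, beta and threshold stand in for the tunable parameters):

        def combined_cost(pixel_l: int, pixel_r: int,
                          census_l: int, census_r: int,
                          alpha: int, beta: int, threshold: int) -> int:
            ad = abs(pixel_l - pixel_r)                # AD: absolute difference term
            ctc = bin(census_l ^ census_r).count("1")  # CTC: Hamming distance of census words
            comb_cost = alpha * ad + beta * (ctc << 3)
            return min(comb_cost >> 5, threshold)      # CLAMP(COMB_COST >> 5, threshold)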
    method
    property
    confidenceThreshold
    Disparities with a confidence value under this threshold are accepted. A higher confidence threshold means disparities with less confidence are accepted too.
    method
    property
    disparityWidth
    Disparity search range, default 96 pixels.
    method
    property
    enableCompanding
    Disparity companding using sparse matching: matching pixel by pixel for N disparities, matching every 2nd pixel for M disparities, matching every 4th pixel for T disparities. In the case of 96 disparities: N=48, M=32, T=16, so the search range is extended to 48 + 2·32 + 4·16 = 176 disparities by sparse matching. Note: when enabling this flag only the depth map will be affected; the disparity map is not.
    method
    property
    invalidDisparityValue
    Used only for debug purposes; SW post-processing handles only the invalid value of 0 properly.
    method
    property
    linearEquationParameters
    Cost calculation linear equation parameters.
    method
    class

    depthai.StereoDepthConfig.PostProcessing

    class
    BrightnessFilter
    Brightness filtering. If input frame pixel is too dark or too bright, disparity will be invalidated. The idea is that for too dark/too bright pixels we have low confidence, since that area was under/over exposed and details were lost.
    class
    DecimationFilter
    Decimation filter. Reduces the depth scene complexity. The filter runs on kernel sizes [2x2] to [8x8] pixels.
    class
    Filter
    Members:    DECIMATION    SPECKLE    MEDIAN    TEMPORAL    SPATIAL    FILTER_COUNT
    class
    SpatialFilter
    1D edge-preserving spatial filter using high-order domain transform.
    class
    SpeckleFilter
    Speckle filtering. Removes speckle noise.
    class
    TemporalFilter
    Temporal filtering with optional persistence.
    class
    ThresholdFilter
    Threshold filtering. Filters out distances outside of a given interval.
    method
    property
    bilateralSigmaValue
    Sigma value for bilateral filter. 0 means disabled. A larger value of the parameter means that farther colors within the pixel neighborhood will be mixed together.
    method
    property
    brightnessFilter
    Brightness filtering. If input frame pixel is too dark or too bright, disparity will be invalidated. The idea is that for too dark/too bright pixels we have low confidence, since that area was under/over exposed and details were lost.
    method
    property
    decimationFilter
    Decimation filter. Reduces disparity/depth map x/y complexity, reducing runtime complexity for other filters.
    method
    property
    filteringOrder
    Order of filters to be applied if filtering is enabled.
    method
    property
    median
    Set kernel size for disparity/depth median filtering, or disable
    method
    property
    spatialFilter
    Edge-preserving filtering: This type of filter will smooth the depth noise while attempting to preserve edges.
    method
    property
    speckleFilter
    Speckle filtering. Removes speckle noise.
    method
    property
    temporalFilter
    Temporal filtering with optional persistence.
    method
    property
    thresholdFilter
    Threshold filtering. Filters out distances outside of a given interval.
    method
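    The filters above are usually configured together through the raw config in one read-modify-write pass; a sketch (stereo is a StereoDepth node created as in the earlier sketches, and the values are illustrative):

        cfg = stereo.initialConfig.get()
        cfg.postProcessing.speckleFilter.enable = True
        cfg.postProcessing.speckleFilter.speckleRange = 50
        cfg.postProcessing.temporalFilter.enable = True
        cfg.postProcessing.spatialFilter.enable = True
        cfg.postProcessing.spatialFilter.holeFillingRadius = 2
        cfg.postProcessing.spatialFilter.numIterations = 1
        cfg.postProcessing.thresholdFilter.minRange = 400     # depth units, e.g. mm
        cfg.postProcessing.thresholdFilter.maxRange = 15000
        cfg.postProcessing.decimationFilter.decimationFactor = 2
        stereo.initialConfig.set(cfg)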
    class

    depthai.StereoDepthConfig.PostProcessing.BrightnessFilter

    method
    property
    maxBrightness
    Maximum pixel brightness. If the input pixel is greater than or equal to this value, the depth value is invalidated.
    method
    property
    minBrightness
    Minimum pixel brightness. If the input pixel is less than or equal to this value, the depth value is invalidated.
    method
    class

    depthai.StereoDepthConfig.PostProcessing.DecimationFilter

    class
    DecimationMode
    Decimation algorithm type.  Members:    PIXEL_SKIPPING    NON_ZERO_MEDIAN    NON_ZERO_MEAN
    method
    property
    decimationFactor
    Decimation factor. Valid values are 1,2,3,4. Disparity/depth map x/y resolution will be decimated with this value.
    method
    property
    decimationMode
    Decimation algorithm type.
    method
    class

    depthai.StereoDepthConfig.PostProcessing.SpatialFilter

    method
    property
    alpha
    The Alpha factor of an exponential moving average: Alpha = 1 means no filtering, Alpha = 0 means an infinite filter. Determines the amount of smoothing.
    method
    property
    delta
    Step-size boundary. Establishes the threshold used to preserve "edges". If the disparity difference between neighboring pixels exceeds the threshold set by this delta parameter, filtering is temporarily disabled. The default value 0 means auto: 3 disparity integer levels; in subpixel mode, 3 × the number of subpixel levels.
    method
    property
    enable
    Whether to enable or disable the filter.
    method
    property
    holeFillingRadius
    Search radius for hole filling: an in-place, heuristic, symmetric hole-filling mode applied horizontally during the filter passes, intended to rectify minor artefacts with minimal performance impact.
    method
    property
    numIterations
    Number of iterations over the image in both horizontal and vertical direction.
    method
    class

    depthai.StereoDepthConfig.PostProcessing.SpeckleFilter

    method
    property
    differenceThreshold
    Maximum difference between neighbor disparity pixels to put them into the same blob. Units in disparity integer levels.
    method
    property
    enable
    Whether to enable or disable the filter.
    method
    property
    speckleRange
    Speckle search range.
    method
    class

    depthai.StereoDepthConfig.PostProcessing.TemporalFilter

    class
    PersistencyMode
    Persistency algorithm type.  Members:    PERSISTENCY_OFF    VALID_8_OUT_OF_8    VALID_2_IN_LAST_3    VALID_2_IN_LAST_4    VALID_2_OUT_OF_8    VALID_1_IN_LAST_2    VALID_1_IN_LAST_5    VALID_1_IN_LAST_8    PERSISTENCY_INDEFINITELY
    method
    property
    alpha
    The Alpha factor of an exponential moving average: Alpha = 1 means no filtering, Alpha = 0 means an infinite filter. Determines the extent of the temporal history that is averaged.
    method
    property
    delta
    Step-size boundary. Establishes the threshold used to preserve surfaces (edges). If the disparity difference between neighboring pixels exceeds the threshold set by this delta parameter, filtering is temporarily disabled. The default value 0 means auto: 3 disparity integer levels; in subpixel mode, 3 × the number of subpixel levels.
    method
    property
    enable
    Whether to enable or disable the filter.
    method
    property
    persistencyMode
    Persistency mode. If the current disparity/depth value is invalid, it will be replaced by an older value, based on persistency mode.
    method
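    A sketch of enabling persistence (the enum path follows this class hierarchy; the chosen mode is illustrative, and stereo is a StereoDepth node as before):

        import depthai as dai

        Persistency = dai.StereoDepthConfig.PostProcessing.TemporalFilter.PersistencyMode

        cfg = stereo.initialConfig.get()
        cfg.postProcessing.temporalFilter.enable = True
        cfg.postProcessing.temporalFilter.persistencyMode = Persistency.VALID_2_IN_LAST_4
        stereo.initialConfig.set(cfg)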
    class

    depthai.StereoDepthConfig.PostProcessing.TemporalFilter.PersistencyMode

    class

    depthai.StereoDepthConfig.PostProcessing.ThresholdFilter

    method
    property
    maxRange
    Maximum range in depth units. Depth values over this value are invalidated.
    method
    property
    minRange
    Minimum range in depth units. Depth values under this value are invalidated.
    method
    class

    depthai.StereoDepthProperties

    class
    DepthAlign
    Align the disparity/depth to the perspective of a rectified output, or center it  Members:    RECTIFIED_RIGHT    RECTIFIED_LEFT    CENTER
    class
    MedianFilter
    Median filter config  Members:    MEDIAN_OFF    KERNEL_3x3    KERNEL_5x5    KERNEL_7x7
    class
    property
    alphaScaling
    Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). On some high-distortion lenses, and/or due to rectification (image rotated), invalid areas may appear even with alpha=0; in these cases alpha < 0.0 helps remove invalid areas. See getOptimalNewCameraMatrix from OpenCV for more details.
    method
    property
    baseline
    Override baseline from calibration. Used only in disparity to depth conversion. Units are centimeters.
    method
    property
    depthAlignCamera
    Which camera to align disparity/depth to. When configured (not AUTO), takes precedence over 'depthAlign'
    method
    property
    depthAlignmentUseSpecTranslation
    Use baseline information for depth alignment from specs (design data) or from calibration. Suitable for debugging. Utilizes calibrated value as default
    method
    property
    disparityToDepthUseSpecTranslation
    Use baseline information for disparity to depth conversion from specs (design data) or from calibration. Suitable for debugging. Utilizes calibrated value as default
    method
    property
    enableRectification
    Enable stereo rectification/dewarp or not. Useful to disable when replaying pre-recorded rectified frames.
    method
    property
    enableRuntimeStereoModeSwitch
    Whether to enable switching stereo modes at runtime or not, e.g. standard to subpixel, or standard + LR-check to subpixel + LR-check. Note: this allocates resources for the worst-case scenario; it should be enabled only if dynamic mode switching is required. Default value: false.
    method
    property
    focalLength
    Override focal length from calibration. Used only in disparity to depth conversion. Units are pixels.
    method
    property
    focalLengthFromCalibration
    Whether to use the horizontal focal length from calibration intrinsics (fx) or calculate it based on the calibration FOV. Default value is true. If set to false it's calculated from the FOV and image resolution: focalLength = calib.width / (2.f * tan(calib.fov / 2 / 180.f * pi));
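    The same calculation in Python (calib_width and calib_fov are hypothetical stand-ins for the calibration values):

        import math

        calib_width = 1280.0   # rectified image width, px (hypothetical)
        calib_fov = 71.9       # horizontal FOV, degrees (hypothetical)

        focal_px = calib_width / (2.0 * math.tan(math.radians(calib_fov) / 2.0))
        print(focal_px)        # ~882 px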
    method
    property
    height
    Input frame height. Optional (taken from MonoCamera nodes if they exist)
    method
    property
    initialConfig
    Initial stereo config
    method
    property
    mesh
    Specify a direct warp mesh to be used for rectification, instead of intrinsics + extrinsic matrices
    method
    property
    numFramesPool
    Num frames in output pool
    method
    property
    numPostProcessingMemorySlices
    Number of memory slices reserved for stereo depth post processing. -1 means auto, memory will be allocated based on initial stereo settings and number of shaves. 0 means that it will reuse the memory slices assigned for main stereo algorithm. For optimal performance it's recommended to allocate more than 0, so post processing will run in parallel with main stereo algorithm. Minimum 1, maximum 6.
    method
    property
    numPostProcessingShaves
    Number of shaves reserved for stereo depth post processing. Post processing can use multiple shaves to increase performance. -1 means auto, resources will be allocated based on enabled filters. 0 means that it will reuse the shave assigned for main stereo algorithm. For optimal performance it's recommended to allocate more than 0, so post processing will run in parallel with main stereo algorithm. Minimum 1, maximum 10.
    method
    property
    outHeight
    Output disparity/depth height. Currently only used when aligning to RGB
    method
    property
    outKeepAspectRatio
    Whether to keep aspect ratio of the input (rectified) or not
    method
    property
    outWidth
    Output disparity/depth width. Currently only used when aligning to RGB
    method
    property
    rectificationUseSpecTranslation
    Obtain rectification matrices using spec translation (design data) or from calibration in calculations. Suitable for debugging. Default: false
    method
    property
    rectifyEdgeFillColor
    Fill color for missing data at frame edges - grayscale 0..255, or -1 to replicate pixels
    method
    property
    useHomographyRectification
    Use a 3x3 homography matrix for stereo rectification instead of a sparse mesh generated on device. Default behaviour is AUTO: for lenses with FOV over 85 degrees a sparse mesh is used, otherwise the 3x3 homography. If custom mesh data is provided through loadMeshData or loadMeshFiles this option is ignored. true: a 3x3 homography matrix generated from calibration data is used for stereo rectification; can't correct lens distortion. false: a sparse mesh is generated on-device from calibration data, with the mesh step specified by setMeshStep (default: (16, 16)); can correct lens distortion. The mesh-generation implementation is the same as OpenCV's initUndistortRectifyMap function. Only the first 8 distortion coefficients are used from the calibration data.
    method
    property
    width
    Input frame width. Optional (taken from MonoCamera nodes if they exist)
    method
    class

    depthai.StereoDepthProperties.RectificationMesh

    property
    meshLeftUri
    URI which points to the mesh array for 'left' input rectification
    method
    property
    meshRightUri
    URI which points to the mesh array for 'right' input rectification
    method
    property
    meshSize
    Mesh array size in bytes, for each of 'left' and 'right' (the sizes need to match)
    method
    property
    stepHeight
    Distance between mesh points, in the vertical direction
    method
    property
    stepWidth
    Distance between mesh points, in the horizontal direction
    method
    class

    depthai.SystemLoggerProperties

    class

    depthai.ToFConfig(depthai.Buffer)

    method
    method
    get(self) -> RawToFConfig: RawToFConfig
    Retrieve configuration data for ToF.  Returns:     config for the ToF decoding algorithm
    method
    set(self, config: RawToFConfig) -> ToFConfig: ToFConfig
    Set explicit configuration.  Parameter ``config``:     Explicit configuration
    method
    setMedianFilter(self, median: MedianFilter) -> ToFConfig: ToFConfig
    Parameter ``median``:     Set kernel size for median filtering, or disable
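    A sketch of building such a message and sending it at runtime (the XLinkIn queue name is hypothetical; it assumes a ToF node whose config input is wired to that stream):

        import depthai as dai

        tof_cfg = dai.ToFConfig()
        tof_cfg.setMedianFilter(dai.MedianFilter.KERNEL_5x5)
        tof_config_queue.send(tof_cfg)   # hypothetical input queue to the ToF node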
    class

    depthai.ToFProperties

    property
    initialConfig
    Initial ToF config
    method
    property
    numFramesPool
    Num frames in output pool
    method
    property
    numShaves
    Number of shaves reserved for ToF decoding.
    method
    property
    warpHwIds
    Warp HW IDs to use for undistortion, if empty, use auto/default
    method
    class

    depthai.TrackedFeature

    method
    property
    age
    Feature age in frames
    method
    property
    harrisScore
    Feature Harris score
    method
    property
    id
    Feature ID. Persistent between frames if motion estimation is enabled.
    method
    property
    position
    x, y position of the detected feature
    method
    property
    trackingError
    Feature tracking error
    method
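    A sketch of reading these fields from a TrackedFeatures message on the host (the output queue name is hypothetical):

        features = features_queue.get().trackedFeatures
        for f in features:
            # Persistent id (with motion estimation), age in frames, sub-pixel position
            print(f.id, f.age, f.position.x, f.position.y, f.harrisScore)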
    class

    depthai.TrackedFeatures(depthai.Buffer)

    method
    method
    getSequenceNum(self) -> int: int
    Retrieves sequence number
    method
    getTimestamp(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp related to dai::Clock::now()
    method
    getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
    method
    method
    method
    property
    method
    class

    depthai.Tracklets(depthai.Buffer)

    method
    method
    getSequenceNum(self) -> int: int
    Retrieves sequence number
    method
    getTimestamp(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp related to dai::Clock::now()
    method
    getTimestampDevice(self) -> datetime.timedelta: datetime.timedelta
    Retrieves timestamp directly captured from device's monotonic clock, not synchronized to host time. Used mostly for debugging
    method
    method
    setTimestamp(self, arg0: datetime.timedelta) -> Tracklets: Tracklets
    Sets image timestamp related to dai::Clock::now()
    method
    property
    tracklets
    Retrieve data for Tracklets.  Returns:     Vector of object tracker data, carrying tracking information.
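    A sketch of consuming this message on the host (the queue name is hypothetical; roi is normalized, so it is denormalized to the frame size the detections ran on):

        track_msg = tracklets_queue.get()
        for t in track_msg.tracklets:
            roi = t.roi.denormalize(300, 300)   # hypothetical 300x300 detection frame
            print(t.id, t.status.name, int(roi.topLeft().x), int(roi.topLeft().y))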
    method
    class

    depthai.Version

    variable
    method
    method
    method
    method
    method
    __str__(self) -> str: str
    Convert Version to string
    method
    method
    getSemver(self) -> Version: Version
    Retrieves semver version (no build information)
    method
    toStringSemver(self) -> str: str
    Convert Version to semver (no build information) string
    class

    depthai.VideoEncoderProperties