FeatureTrackerConfig

This message is used to configure the FeatureTracker node. You can set the CornerDetector, FeatureMaintainer and MotionEstimator.
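A minimal usage sketch, assuming a connected OAK device with a mono camera (the stream names and the HARRIS choice below are illustrative): the pipeline links a FeatureTracker node to the camera, and an updated FeatureTrackerConfig is sent at runtime through the node's inputConfig.

  import depthai as dai

  pipeline = dai.Pipeline()

  # Mono camera feeds the feature tracker
  monoLeft = pipeline.create(dai.node.MonoCamera)
  monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)

  featureTracker = pipeline.create(dai.node.FeatureTracker)
  monoLeft.out.link(featureTracker.inputImage)

  # Tracked features go to the host
  xoutFeatures = pipeline.create(dai.node.XLinkOut)
  xoutFeatures.setStreamName("trackedFeatures")
  featureTracker.outputFeatures.link(xoutFeatures.input)

  # Host -> device channel for runtime configuration updates
  xinConfig = pipeline.create(dai.node.XLinkIn)
  xinConfig.setStreamName("trackerConfig")
  xinConfig.out.link(featureTracker.inputConfig)

  with dai.Device(pipeline) as device:
      configQueue = device.getInputQueue("trackerConfig")

      # Change the corner detector type at runtime
      cfg = dai.FeatureTrackerConfig()
      cfg.setCornerDetector(dai.RawFeatureTrackerConfig.CornerDetector.Type.HARRIS)
      configQueue.send(cfg)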

Reference

class depthai.FeatureTrackerConfig

FeatureTrackerConfig message. Carries config for feature tracking algorithm

class CornerDetector

Corner detector configuration structure.

class Thresholds

Threshold settings structure for corner detector.

property decreaseFactor

When the detected number of features exceeds the maximum in a cell, the threshold is lowered by multiplying its value with this factor.

property increaseFactor

When the detected number of features doesn't exceed the maximum in a cell, the threshold is increased by multiplying its value with this factor.

property initialValue

Minimum strength a feature must have to be detected. 0 means automatic threshold update, which is recommended so the tracker can adapt to different scenes/textures. Each cell has its own threshold. Empirical value.

property max

Maximum limit for the threshold. Applicable when automatic threshold update is enabled. 0 means auto. Empirical value.

property min

Minimum limit for the threshold. Applicable when automatic threshold update is enabled. 0 means auto (6000000 for HARRIS, 1200 for SHI_THOMASI). Empirical value.
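A sketch of pinning the detector to a fixed threshold instead of the automatic update; the config substructures are assumed to be default-constructible as bound, and all numeric values are illustrative, not recommendations:

  import depthai as dai

  thresholds = dai.RawFeatureTrackerConfig.CornerDetector.Thresholds()
  thresholds.initialValue = 20000  # illustrative fixed strength; 0 would mean automatic update
  thresholds.min = 4000            # illustrative lower limit; 0 = auto
  thresholds.max = 0               # 0 = auto
  thresholds.decreaseFactor = 0.9  # illustrative; multiplies the threshold when a cell exceeds its feature maximum
  thresholds.increaseFactor = 1.1  # illustrative; multiplies the threshold when a cell stays within its maximum

  cornerDetector = dai.RawFeatureTrackerConfig.CornerDetector()
  cornerDetector.thresholds = thresholds

  cfg = dai.FeatureTrackerConfig()
  cfg.setCornerDetector(cornerDetector)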

class Type

Members:

HARRIS

SHI_THOMASI

property name
property cellGridDimension

Ensures distributed feature detection across the image. The image is divided into horizontal and vertical cells; each cell has a target feature count = numTargetFeatures / cellGridDimension and its own feature threshold. A value of 4 means the image is divided into 4x4 cells of equal width/height. Maximum 4, minimum 1.

property enableSobel

Enable a 3x3 Sobel operator to smooth the image whose gradient is to be computed. If disabled, a simple 1D row/column differentiator is used for the gradient.

property enableSorting

Enable or disable sorting of detected features based on their score.

property numMaxFeatures

Hard limit for the maximum number of features that can be detected. 0 means auto, will be set to the maximum value based on memory constraints.

property numTargetFeatures

Target number of features to detect. Maximum number of features is determined at runtime based on algorithm type.

property thresholds

Threshold settings. These are advanced settings, suitable for debugging/special cases.

property type

Corner detector algorithm type.
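Bringing the detector options together, a configuration sketch (all values illustrative):

  import depthai as dai

  cornerDetector = dai.RawFeatureTrackerConfig.CornerDetector()
  cornerDetector.type = dai.RawFeatureTrackerConfig.CornerDetector.Type.SHI_THOMASI
  cornerDetector.cellGridDimension = 4    # image split into 4x4 equal cells
  cornerDetector.numTargetFeatures = 320  # illustrative target count
  cornerDetector.numMaxFeatures = 0       # 0 = auto (memory-constrained maximum)
  cornerDetector.enableSobel = True       # 3x3 Sobel smoothing before the gradient
  cornerDetector.enableSorting = True     # required by minimumDistanceBetweenFeatures filtering

  cfg = dai.FeatureTrackerConfig()
  cfg.setCornerDetector(cornerDetector)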

class FeatureMaintainer

FeatureMaintainer configuration structure.

property enable

Enable or disable feature maintaining.

property lostFeatureErrorThreshold

Optical flow measures the tracking error for every feature. If a point can't be tracked or is out of the image, this error is set to a maximum value. This threshold defines the level at which the tracking accuracy is considered too bad to keep the point.

property minimumDistanceBetweenFeatures

Used to filter out detected feature points that are too close to each other. Requires sorting to be enabled in the detector. The unit of measurement is squared Euclidean distance in pixels.

property trackedFeatureThreshold

Once a feature is detected and tracking begins, its Harris score must be updated on each image, because a feature point can disappear or become too weak to be tracked. This threshold defines the point at which such a feature must be dropped. Since the goal of the algorithm is to provide longer tracks, strong points are added and tracked until they are absolutely untrackable; this is why this value is usually smaller than the detection threshold.
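A FeatureMaintainer sketch follows; the thresholds are illustrative, not tuned recommendations:

  import depthai as dai

  featureMaintainer = dai.RawFeatureTrackerConfig.FeatureMaintainer()
  featureMaintainer.enable = True
  featureMaintainer.minimumDistanceBetweenFeatures = 50  # squared pixel distance; requires enableSorting
  featureMaintainer.lostFeatureErrorThreshold = 50000    # illustrative tracking-error cutoff
  featureMaintainer.trackedFeatureThreshold = 500        # illustrative; kept below the detection threshold

  cfg = dai.FeatureTrackerConfig()
  cfg.setFeatureMaintainer(featureMaintainer)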

class MotionEstimator

Motion estimator configuration structure. Used to re-identify features between the current and previous frames.

class OpticalFlow

Optical flow configuration structure.

property epsilon

Feature tracking termination criterion. Optical flow will refine the feature position on each pyramid level until the displacement between two refinements is smaller than this value. Decreasing this number increases runtime.

property maxIterations

Feature tracking termination criterion. Optical flow will refine the feature position at most this many times on each pyramid level. If the epsilon criterion described above is not met after this number of iterations, the algorithm continues with the current calculated value. Increasing this number increases runtime.

property pyramidLevels

Number of pyramid levels, only for optical flow. AUTO means the value is decided based on input resolution: 3 if image width <= 640, else 4. Valid values are 3 or 4 for VGA, and 4 for 720p and above.

property searchWindowHeight

Image patch height used to track features. Must be an odd number, maximum 9. A value of N means the algorithm will be able to track motion at most (N-1)/2 pixels in a direction per pyramid level. Increasing this number increases runtime.

property searchWindowWidth

Image patch width used to track features. Must be an odd number, maximum 9. A value of N means the algorithm will be able to track motion at most (N-1)/2 pixels in a direction per pyramid level. Increasing this number increases runtime.

class Type

Members:

LUCAS_KANADE_OPTICAL_FLOW

HW_MOTION_ESTIMATION

property name
property enable

Enable or disable motion estimation.

property opticalFlow

Optical flow configuration. Takes effect only if the MotionEstimator algorithm type is set to LUCAS_KANADE_OPTICAL_FLOW.

property type

Motion estimator algorithm type.
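A motion estimator sketch using Lucas-Kanade optical flow, staying within the documented limits (all values illustrative):

  import depthai as dai

  opticalFlow = dai.RawFeatureTrackerConfig.MotionEstimator.OpticalFlow()
  opticalFlow.pyramidLevels = 4      # e.g. 4 for 720p input
  opticalFlow.searchWindowWidth = 9  # odd, max 9 -> up to (9-1)/2 = 4 px per level
  opticalFlow.searchWindowHeight = 9
  opticalFlow.epsilon = 0.01         # illustrative termination threshold
  opticalFlow.maxIterations = 9      # illustrative per-level iteration cap

  motionEstimator = dai.RawFeatureTrackerConfig.MotionEstimator()
  motionEstimator.enable = True
  motionEstimator.type = dai.RawFeatureTrackerConfig.MotionEstimator.Type.LUCAS_KANADE_OPTICAL_FLOW
  motionEstimator.opticalFlow = opticalFlow

  cfg = dai.FeatureTrackerConfig()
  cfg.setMotionEstimator(motionEstimator)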

get(self: depthai.FeatureTrackerConfig) → depthai.RawFeatureTrackerConfig

Retrieve configuration data for FeatureTracker.

Returns

config for feature tracking algorithm

getData(self: object) → numpy.ndarray[numpy.uint8]

Get non-owning reference to internal buffer

Returns

Reference to internal buffer

getRaw(self: depthai.ADatatype) → depthai.RawBuffer
getSequenceNum(self: depthai.Buffer) → int

Retrieves sequence number

getTimestamp(self: depthai.Buffer) → datetime.timedelta

Retrieves timestamp related to dai::Clock::now()

getTimestampDevice(self: depthai.Buffer) → datetime.timedelta

Retrieves timestamp directly captured from device’s monotonic clock, not synchronized to host time. Used mostly for debugging

set(self: depthai.FeatureTrackerConfig, config: depthai.RawFeatureTrackerConfig) → depthai.FeatureTrackerConfig

Set explicit configuration.

Parameter config:

Explicit configuration
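Together, get() and set() support a read-modify-write pattern on the full raw configuration, for example starting from the node's initialConfig (featureTracker and configQueue are from the sketch in the introduction; the tweak is illustrative):

  raw = featureTracker.initialConfig.get()    # node's current raw config
  raw.cornerDetector.numTargetFeatures = 512  # illustrative tweak
  cfg = dai.FeatureTrackerConfig()
  cfg.set(raw)
  configQueue.send(cfg)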

setCornerDetector(*args, **kwargs)

Overloaded function.

  1. setCornerDetector(self: depthai.FeatureTrackerConfig, cornerDetector: depthai.RawFeatureTrackerConfig.CornerDetector.Type) -> depthai.FeatureTrackerConfig

Set corner detector algorithm type.

Parameter cornerDetector:

Corner detector type, HARRIS or SHI_THOMASI

  2. setCornerDetector(self: depthai.FeatureTrackerConfig, config: depthai.RawFeatureTrackerConfig.CornerDetector) -> depthai.FeatureTrackerConfig

Set corner detector full configuration.

Parameter config:

Corner detector configuration

setData(*args, **kwargs)

Overloaded function.

  1. setData(self: depthai.Buffer, arg0: List[int]) -> None

Copies data to internal buffer.

  2. setData(self: depthai.Buffer, arg0: numpy.ndarray[numpy.uint8]) -> None

Copies data to internal buffer.

setFeatureMaintainer(*args, **kwargs)

Overloaded function.

  1. setFeatureMaintainer(self: depthai.FeatureTrackerConfig, enable: bool) -> depthai.FeatureTrackerConfig

Enable or disable feature maintainer.

Parameter enable:

  2. setFeatureMaintainer(self: depthai.FeatureTrackerConfig, config: depthai.RawFeatureTrackerConfig.FeatureMaintainer) -> depthai.FeatureTrackerConfig

Set feature maintainer full configuration.

Parameter config:

feature maintainer configuration

setHwMotionEstimation(self: depthai.FeatureTrackerConfig) → depthai.FeatureTrackerConfig

Set hardware accelerated motion estimation using block matching. Faster than optical flow (software implementation) but might not be as accurate.
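For example, the two backends can be swapped at runtime (useHwMotionEstimation is a hypothetical flag; configQueue is from the sketch in the introduction):

  cfg = dai.FeatureTrackerConfig()
  if useHwMotionEstimation:        # hypothetical flag
      cfg.setHwMotionEstimation()  # block matching, hardware accelerated
  else:
      cfg.setOpticalFlow()         # Lucas-Kanade optical flow, software
  configQueue.send(cfg)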

setMotionEstimator(*args, **kwargs)

Overloaded function.

  1. setMotionEstimator(self: depthai.FeatureTrackerConfig, enable: bool) -> depthai.FeatureTrackerConfig

Enable or disable motion estimator.

Parameter enable:

  2. setMotionEstimator(self: depthai.FeatureTrackerConfig, config: depthai.RawFeatureTrackerConfig.MotionEstimator) -> depthai.FeatureTrackerConfig

Set motion estimator full configuration.

Parameter config:

Motion estimator configuration

setNumTargetFeatures(self: depthai.FeatureTrackerConfig, numTargetFeatures: int) → depthai.FeatureTrackerConfig

Set number of target features to detect.

Parameter numTargetFeatures:

Number of features
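Since each setter returns the message itself, calls can be chained; a short sketch (the count is illustrative, configQueue from the introduction):

  cfg = (dai.FeatureTrackerConfig()
         .setNumTargetFeatures(256)   # illustrative count
         .setFeatureMaintainer(True))
  configQueue.send(cfg)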

setOpticalFlow(*args, **kwargs)

Overloaded function.

  1. setOpticalFlow(self: depthai.FeatureTrackerConfig) -> depthai.FeatureTrackerConfig

Set optical flow as motion estimation algorithm type.

  2. setOpticalFlow(self: depthai.FeatureTrackerConfig, config: depthai.RawFeatureTrackerConfig.MotionEstimator.OpticalFlow) -> depthai.FeatureTrackerConfig

Set optical flow full configuration.

Parameter config:

Optical flow configuration

setSequenceNum(self: depthai.Buffer, arg0: int) → depthai.Buffer

Sets sequence number

setTimestamp(self: depthai.Buffer, arg0: datetime.timedelta) → depthai.Buffer

Sets timestamp related to dai::Clock::now()

setTimestampDevice(self: depthai.Buffer, arg0: datetime.timedelta) → depthai.Buffer

Sets timestamp captured from the device's monotonic clock

class dai::FeatureTrackerConfig : public dai::Buffer

FeatureTrackerConfig message. Carries config for feature tracking algorithm

Public Types

using CornerDetector = RawFeatureTrackerConfig::CornerDetector
using MotionEstimator = RawFeatureTrackerConfig::MotionEstimator
using FeatureMaintainer = RawFeatureTrackerConfig::FeatureMaintainer

Public Functions

FeatureTrackerConfig()

Construct FeatureTrackerConfig message.

FeatureTrackerConfig(std::shared_ptr<RawFeatureTrackerConfig> ptr)
~FeatureTrackerConfig() = default
FeatureTrackerConfig &setCornerDetector(dai::FeatureTrackerConfig::CornerDetector::Type cornerDetector)

Set corner detector algorithm type.

Parameters
  • cornerDetector: Corner detector type, HARRIS or SHI_THOMASI

FeatureTrackerConfig &setCornerDetector(dai::FeatureTrackerConfig::CornerDetector config)

Set corner detector full configuration.

Parameters
  • config: Corner detector configuration

FeatureTrackerConfig &setOpticalFlow()

Set optical flow as motion estimation algorithm type.

FeatureTrackerConfig &setOpticalFlow(dai::FeatureTrackerConfig::MotionEstimator::OpticalFlow config)

Set optical flow full configuration.

Parameters
  • config: Optical flow configuration

FeatureTrackerConfig &setHwMotionEstimation()

Set hardware accelerated motion estimation using block matching. Faster than optical flow (software implementation) but might not be as accurate.

FeatureTrackerConfig &setNumTargetFeatures(std::int32_t numTargetFeatures)

Set number of target features to detect.

Parameters
  • numTargetFeatures: Number of features

FeatureTrackerConfig &setMotionEstimator(bool enable)

Enable or disable motion estimator.

Parameters
  • enable:

FeatureTrackerConfig &setMotionEstimator(dai::FeatureTrackerConfig::MotionEstimator config)

Set motion estimator full configuration.

Parameters
  • config: Motion estimator configuration

FeatureTrackerConfig &setFeatureMaintainer(bool enable)

Enable or disable feature maintainer.

Parameters
  • enable:

FeatureTrackerConfig &setFeatureMaintainer(dai::FeatureTrackerConfig::FeatureMaintainer config)

Set feature maintainer full configuration.

Parameters
  • config: feature maintainer configuration

FeatureTrackerConfig &set(dai::RawFeatureTrackerConfig config)

Set explicit configuration.

Parameters
  • config: Explicit configuration

dai::RawFeatureTrackerConfig get() const

Retrieve configuration data for FeatureTracker.

Return

config for feature tracking algorithm

Private Functions

std::shared_ptr<RawBuffer> serialize() const override

Private Members

RawFeatureTrackerConfig &cfg
