# FeatureTrackerConfig

This message is used to configure the
[FeatureTracker](https://docs.luxonis.com/software-v3/depthai/examples/feature_tracker/feature_tracker.md) node. You can set the
CornerDetector, FeatureMaintainer and MotionEstimator.

## Reference

### dai::FeatureTrackerConfig

Kind: class

FeatureTrackerConfig message. Carries the configuration for the feature tracking algorithm.

#### dai::FeatureTrackerConfig::CornerDetector

Kind: struct

Corner detector configuration structure.

##### dai::FeatureTrackerConfig::CornerDetector::Thresholds

Kind: struct

Threshold settings structure for corner detector.

###### float initialValue

Kind: variable

Minimum strength a feature must have to be detected. A value of 0 enables automatic threshold update, which is recommended so the
tracker can adapt to different scenes/textures. Each cell has its own threshold. Empirical value.

###### float min

Kind: variable

Minimum limit for the threshold. Applies when automatic threshold update is enabled. 0 means auto, which resolves to 6000000 for
HARRIS and 1200 for SHI_THOMASI. Empirical value.

###### float max

Kind: variable

Maximum limit for the threshold. Applies when automatic threshold update is enabled. 0 means auto. Empirical value.

###### float decreaseFactor

Kind: variable

When the detected number of features in a cell exceeds the maximum, the threshold is lowered by multiplying its value with this factor.

###### float increaseFactor

Kind: variable

When the detected number of features in a cell does not exceed the maximum, the threshold is increased by multiplying its value
with this factor.
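The decrease/increase rule above can be sketched as a small stand-alone function. This is an illustrative Python model of the documented behavior, not the actual firmware implementation; the function name and the clamping to the min/max limits are assumptions based on the Thresholds fields described above.

```python
def update_cell_threshold(threshold, detected, cell_max,
                          decrease_factor, increase_factor,
                          min_limit, max_limit):
    """Adapt a per-cell detection threshold as described above:
    multiply by decreaseFactor when the detected feature count exceeds
    the cell maximum, by increaseFactor otherwise, then clamp the
    result to the configured [min, max] limits."""
    factor = decrease_factor if detected > cell_max else increase_factor
    return min(max(threshold * factor, min_limit), max_limit)
```

For example, a cell at threshold 100 that detected 60 features against a maximum of 50 would drop to 90 with a decreaseFactor of 0.9.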

###### DEPTHAI_SERIALIZE(Thresholds, initialValue, min, max, decreaseFactor, increaseFactor)

Kind: function

##### std::int32_t Type

Kind: enum

###### HARRIS

Kind: enum_value

Harris corner detector.

###### SHI_THOMASI

Kind: enum_value

Shi-Tomasi corner detector.

##### Type type

Kind: variable

Corner detector algorithm type.

##### std::int32_t cellGridDimension

Kind: variable

Ensures distributed feature detection across the image. The image is divided into horizontal and vertical cells; each cell has a
target feature count = numTargetFeatures / cellGridDimension, and each cell has its own feature threshold. A value of 4 means the
image is divided into 4x4 cells of equal width/height. Maximum 4, minimum 1.
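To make the cell grid concrete, here is a minimal Python sketch (a hypothetical helper, not part of the API) that maps a pixel coordinate to its cell in a cellGridDimension x cellGridDimension grid of equal-sized cells:

```python
def cell_of(x, y, width, height, cell_grid_dimension):
    """Return the (row, col) of the equal-sized grid cell containing
    pixel (x, y); each cell keeps its own detection threshold."""
    col = min(x * cell_grid_dimension // width, cell_grid_dimension - 1)
    row = min(y * cell_grid_dimension // height, cell_grid_dimension - 1)
    return row, col
```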

##### std::int32_t numTargetFeatures

Kind: variable

Target number of features to detect. Maximum number of features is determined at runtime based on algorithm type.

##### std::int32_t numMaxFeatures

Kind: variable

Hard limit for the maximum number of features that can be detected. 0 means auto, will be set to the maximum value based on memory
constraints.

##### bool enableSobel

Kind: variable

Enable the 3x3 Sobel operator to smooth the image whose gradient is to be computed. If disabled, a simple 1D row/column
differentiator is used for the gradient.

##### bool enableSorting

Kind: variable

Whether to sort detected features by their score.

##### Thresholds thresholds

Kind: variable

Threshold settings. These are advanced settings, suitable for debugging/special cases.

##### DEPTHAI_SERIALIZE(CornerDetector, type, cellGridDimension, numTargetFeatures, numMaxFeatures, thresholds, enableSobel,
enableSorting)

Kind: function

#### dai::FeatureTrackerConfig::FeatureMaintainer

Kind: struct

FeatureMaintainer configuration structure.

##### bool enable

Kind: variable

Whether to enable feature maintaining.

##### float minimumDistanceBetweenFeatures

Kind: variable

Used to filter out detected feature points that are too close to each other. Requires sorting to be enabled in the detector. The
unit of measurement is squared Euclidean distance in pixels.
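A minimal sketch of this filtering, assuming a greedy pass over score-sorted points (the helper name and greedy strategy are illustrative, not the firmware's exact algorithm):

```python
def filter_close_features(points, min_sq_dist):
    """Keep a point only if its squared Euclidean distance (in pixels)
    to every already-kept point is at least min_sq_dist. Points are
    expected in descending score order, as produced with sorting enabled."""
    kept = []
    for x, y in points:
        if all((x - kx) ** 2 + (y - ky) ** 2 >= min_sq_dist for kx, ky in kept):
            kept.append((x, y))
    return kept
```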

##### float lostFeatureErrorThreshold

Kind: variable

Optical flow measures the tracking error for every feature. If a point cannot be tracked or falls outside the image, this error is
set to a maximum value. This threshold defines the level at which the tracking accuracy is considered too poor to keep the point.

##### float trackedFeatureThreshold

Kind: variable

Once a feature is detected and tracking has started, its Harris score must be updated on each image, because a feature point can
disappear or become too weak to be tracked. This threshold defines the point at which such a feature must be dropped. Since the
goal of the algorithm is to provide longer tracks, strong points are added and tracked until they become absolutely untrackable.
This is why this value is usually smaller than the detection threshold.

##### DEPTHAI_SERIALIZE(FeatureMaintainer, enable, minimumDistanceBetweenFeatures, lostFeatureErrorThreshold,
trackedFeatureThreshold)

Kind: function

#### dai::FeatureTrackerConfig::MotionEstimator

Kind: struct

Used for feature reidentification between current and previous features.

##### dai::FeatureTrackerConfig::MotionEstimator::OpticalFlow

Kind: struct

Optical flow configuration structure.

###### std::int32_t pyramidLevels

Kind: variable

Number of pyramid levels; only for optical flow. AUTO means the value is decided based on input resolution: 3 if the image width is
<= 640, else 4. Valid values are 3 or 4 for VGA, and 4 for 720p and above.
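The AUTO rule above amounts to a one-liner; this is an illustrative sketch, not an API call:

```python
def auto_pyramid_levels(image_width):
    """AUTO rule: 3 pyramid levels if the image width is <= 640 (VGA),
    otherwise 4 (720p and above)."""
    return 3 if image_width <= 640 else 4
```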

###### std::int32_t searchWindowWidth

Kind: variable

Image patch width used to track features. Must be an odd number, maximum 9. A value of N means the algorithm can track motion of at
most (N-1)/2 pixels per direction per pyramid level. Increasing this number increases runtime.

###### std::int32_t searchWindowHeight

Kind: variable

Image patch height used to track features. Must be an odd number, maximum 9. A value of N means the algorithm can track motion of
at most (N-1)/2 pixels per direction per pyramid level. Increasing this number increases runtime.
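The (N-1)/2 relationship for both window dimensions can be sketched as follows (a hypothetical helper; the validity checks mirror the constraints stated above):

```python
def max_motion_per_level(search_window_size):
    """For an odd window size N (at most 9), the tracker can follow a
    displacement of at most (N - 1) / 2 pixels per direction per
    pyramid level."""
    if search_window_size % 2 == 0 or search_window_size > 9:
        raise ValueError("search window size must be odd and at most 9")
    return (search_window_size - 1) // 2
```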

###### float epsilon

Kind: variable

Feature tracking termination criteria. Optical flow will refine the feature position on each pyramid level until the displacement
between two refinements is smaller than this value. Decreasing this number increases runtime.

###### std::int32_t maxIterations

Kind: variable

Feature tracking termination criteria. Optical flow will refine the feature position at most this many times on each pyramid level.
If the epsilon criterion described above is not met after this number of iterations, the algorithm continues with the currently
calculated value. Increasing this number increases runtime.
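The interaction of the two termination criteria can be illustrated with a 1-D toy refinement loop. Everything here, including the function name and the update rule, is a hypothetical sketch of the termination logic, not the optical flow solver itself:

```python
def refine_until_converged(start, target, rate, epsilon, max_iterations):
    """Refine an estimate until the displacement between two consecutive
    refinements drops below epsilon, or max_iterations is reached, in
    which case the current value is kept."""
    pos = start
    for _ in range(max_iterations):
        new_pos = pos + rate * (target - pos)
        displacement = abs(new_pos - pos)
        pos = new_pos
        if displacement < epsilon:
            break
    return pos
```

With a loose epsilon the loop stops early; with a very tight epsilon the maxIterations cap takes over and the current estimate is returned.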

###### DEPTHAI_SERIALIZE(OpticalFlow, pyramidLevels, searchWindowWidth, searchWindowHeight, epsilon, maxIterations)

Kind: function

##### std::int32_t Type

Kind: enum

###### LUCAS_KANADE_OPTICAL_FLOW

Kind: enum_value

Using the pyramidal Lucas-Kanade optical flow method.

###### HW_MOTION_ESTIMATION

Kind: enum_value

Using a dense motion estimation hardware block (Block matcher).

##### bool enable

Kind: variable

Whether to enable motion estimation.

##### Type type

Kind: variable

Motion estimator algorithm type.

##### OpticalFlow opticalFlow

Kind: variable

Optical flow configuration. Takes effect only if the MotionEstimator algorithm type is set to LUCAS_KANADE_OPTICAL_FLOW.

##### DEPTHAI_SERIALIZE(MotionEstimator, enable, type, opticalFlow)

Kind: function

#### CornerDetector cornerDetector

Kind: variable

Corner detector configuration. Used for feature detection.

#### MotionEstimator motionEstimator

Kind: variable

Motion estimator configuration. Used for feature reidentification between current and previous features.

#### FeatureMaintainer featureMaintainer

Kind: variable

FeatureMaintainer configuration. Used for feature maintaining.

#### FeatureTrackerConfig()

Kind: function

Construct FeatureTrackerConfig message.

#### ~FeatureTrackerConfig()

Kind: function

#### FeatureTrackerConfig & setCornerDetector(CornerDetector::Type cornerDetector)

Kind: function

Set corner detector algorithm type. parameters: cornerDetector: Corner detector type, HARRIS or SHI_THOMASI

#### FeatureTrackerConfig & setCornerDetector(CornerDetector config)

Kind: function

Set corner detector full configuration. parameters: config: Corner detector configuration

#### FeatureTrackerConfig & setOpticalFlow()

Kind: function

Set optical flow as motion estimation algorithm type.

#### FeatureTrackerConfig & setOpticalFlow(MotionEstimator::OpticalFlow config)

Kind: function

Set optical flow full configuration. parameters: config: Optical flow configuration

#### FeatureTrackerConfig & setHwMotionEstimation()

Kind: function

Set hardware accelerated motion estimation using block matching. Faster than optical flow (software implementation) but might not
be as accurate.

#### FeatureTrackerConfig & setNumTargetFeatures(std::int32_t numTargetFeatures)

Kind: function

Set number of target features to detect. parameters: numTargetFeatures: Number of features

#### FeatureTrackerConfig & setMotionEstimator(bool enable)

Kind: function

Enable or disable the motion estimator. parameters: enable: True to enable the motion estimator, false to disable it

#### FeatureTrackerConfig & setMotionEstimator(MotionEstimator config)

Kind: function

Set motion estimator full configuration. parameters: config: Motion estimator configuration

#### FeatureTrackerConfig & setFeatureMaintainer(bool enable)

Kind: function

Enable or disable the feature maintainer. parameters: enable: True to enable the feature maintainer, false to disable it

#### FeatureTrackerConfig & setFeatureMaintainer(FeatureMaintainer config)

Kind: function

Set feature maintainer full configuration. parameters: config: Feature maintainer configuration

#### void serialize(std::vector< std::uint8_t > & metadata, DatatypeEnum & datatype)

Kind: function

#### DatatypeEnum getDatatype()

Kind: function

Get the datatype of this specific message.

return: DatatypeEnum

#### DEPTHAI_SERIALIZE(FeatureTrackerConfig, cornerDetector, motionEstimator, featureMaintainer)

Kind: function

### Need assistance?

Head over to [Discussion Forum](https://discuss.luxonis.com/) for technical support or any other questions you might have.
