FeatureTracker

FeatureTracker detects key points (features) on a frame and tracks them on the next frame. Valid features are selected using either the Harris score or the Shi-Tomasi corner detector. The default number of target features is 320 and the default maximum number of features is 480. The node supports 720p and 480p input resolutions.

How to place it

pipeline = dai.Pipeline()
featureTracker = pipeline.create(dai.node.FeatureTracker)

dai::Pipeline pipeline;
auto featureTracker = pipeline.create<dai::node::FeatureTracker>();

Inputs and Outputs

             ┌─────────────────┐
inputConfig  │                 │       outputFeatures
────────────►│     Feature     ├────────────────────►
inputImage   │     Tracker     │passthroughInputImage
────────────►│-----------------├────────────────────►
             └─────────────────┘
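
A minimal wiring sketch (Python), assuming a MonoCamera is used as the frame source and XLinkOut nodes expose both outputs to the host; the stream names are arbitrary:

import depthai as dai

pipeline = dai.Pipeline()

# Frame source for inputImage - a ColorCamera works as well
monoLeft = pipeline.create(dai.node.MonoCamera)
monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)

featureTracker = pipeline.create(dai.node.FeatureTracker)

# XLinkOut nodes so the host can read both outputs
xoutFeatures = pipeline.create(dai.node.XLinkOut)
xoutFeatures.setStreamName("trackedFeatures")
xoutPassthrough = pipeline.create(dai.node.XLinkOut)
xoutPassthrough.setStreamName("passthrough")

# Wire the ports shown in the diagram above
monoLeft.out.link(featureTracker.inputImage)
featureTracker.outputFeatures.link(xoutFeatures.input)
featureTracker.passthroughInputImage.link(xoutPassthrough.input)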

Message types

  • inputConfig - FeatureTrackerConfig

  • inputImage - ImgFrame

  • outputFeatures - TrackedFeatures

  • passthroughInputImage - ImgFrame

Usage

pipeline = dai.Pipeline()
featureTracker = pipeline.create(dai.node.FeatureTracker)

# Set number of shaves and number of memory slices to maximum
featureTracker.setHardwareResources(2, 2)
# Specify to wait until a configuration message arrives at the inputConfig input.
featureTracker.setWaitForConfigInput(True)

# You have to use Feature tracker in combination with
# an image frame source - mono/color camera or xlinkIn node

dai::Pipeline pipeline;
auto featureTracker = pipeline.create<dai::node::FeatureTracker>();

// Set number of shaves and number of memory slices to maximum
featureTracker->setHardwareResources(2, 2);
// Specify to wait until a configuration message arrives at the inputConfig input.
featureTracker->setWaitForConfigInput(true);

// You have to use Feature tracker in combination with
// an image frame source - mono/color camera or xlinkIn node
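
A hedged host-side sketch in Python, continuing the Python Usage snippet above. Because setWaitForConfigInput(True) was set, a FeatureTrackerConfig message must be sent to inputConfig before frames are processed. The XLinkIn/XLinkOut nodes, stream names and queue sizes below are illustrative assumptions, and a frame source linked to inputImage is assumed to exist:

# Config input and features output, so the host can drive the node
xinConfig = pipeline.create(dai.node.XLinkIn)
xinConfig.setStreamName("trackedFeaturesConfig")
xinConfig.out.link(featureTracker.inputConfig)

xoutFeatures = pipeline.create(dai.node.XLinkOut)
xoutFeatures.setStreamName("trackedFeatures")
featureTracker.outputFeatures.link(xoutFeatures.input)

with dai.Device(pipeline) as device:
    configQueue = device.getInputQueue("trackedFeaturesConfig")
    featuresQueue = device.getOutputQueue("trackedFeatures", maxSize=4, blocking=False)

    # The node waits for this message because setWaitForConfigInput(True) was set;
    # a default-constructed config keeps the default settings.
    configQueue.send(dai.FeatureTrackerConfig())

    while True:
        features = featuresQueue.get().trackedFeatures
        for feature in features:
            print(f"feature {feature.id}: ({feature.position.x:.1f}, {feature.position.y:.1f})")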

Examples of functionality

Reference

class depthai.node.FeatureTracker

FeatureTracker node. Performs feature tracking and reidentification using motion estimation between 2 consecutive frames.

class Connection

Connection between an Input and Output

class Id

Node identifier. Unique for every node on a single Pipeline

Properties

alias of depthai.FeatureTrackerProperties

getAssetManager(*args, **kwargs)

Overloaded function.

  1. getAssetManager(self: depthai.Node) -> depthai.AssetManager

Get node AssetManager as a const reference

  2. getAssetManager(self: depthai.Node) -> depthai.AssetManager

Get node AssetManager as a const reference

getInputRefs(*args, **kwargs)

Overloaded function.

  1. getInputRefs(self: depthai.Node) -> List[depthai.Node.Input]

Retrieves reference to node inputs

  2. getInputRefs(self: depthai.Node) -> List[depthai.Node.Input]

Retrieves reference to node inputs

getInputs(self: depthai.Node) → List[depthai.Node.Input]

Retrieves all of the node's inputs

getName(self: depthai.Node) → str

Retrieves the node's name

getOutputRefs(*args, **kwargs)

Overloaded function.

  1. getOutputRefs(self: depthai.Node) -> List[depthai.Node.Output]

Retrieves reference to node outputs

  2. getOutputRefs(self: depthai.Node) -> List[depthai.Node.Output]

Retrieves reference to node outputs

getOutputs(self: depthai.Node) → List[depthai.Node.Output]

Retrieves all of the node's outputs

getParentPipeline(*args, **kwargs)

Overloaded function.

  1. getParentPipeline(self: depthai.Node) -> depthai.Pipeline

  2. getParentPipeline(self: depthai.Node) -> depthai.Pipeline

getWaitForConfigInput(self: depthai.node.FeatureTracker) → bool

See also

setWaitForConfigInput

Returns

True if the node waits for an inputConfig message, false otherwise

property id

Id of node

property initialConfig

Initial config to use for feature tracking.

property inputConfig

Input FeatureTrackerConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.

property inputImage

Input message with frame data on which feature tracking is performed. Default queue is non-blocking with size 4.

property outputFeatures

Outputs TrackedFeatures message that carries tracked features results.

property passthroughInputImage

Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.

setHardwareResources(self: depthai.node.FeatureTracker, numShaves: int, numMemorySlices: int) → None

Specify allocated hardware resources for feature tracking. 2 shaves/memory slices are required for optical flow, 1 for corner detection only.

Parameter numShaves:

Number of shaves. Maximum 2.

Parameter numMemorySlices:

Number of memory slices. Maximum 2.

setWaitForConfigInput(self: depthai.node.FeatureTracker, wait: bool) → None

Specify whether or not to wait until a configuration message arrives at the inputConfig input.

Parameter wait:

True to wait for configuration message, false otherwise.

class dai::node::FeatureTracker : public dai::NodeCRTP<Node, FeatureTracker, FeatureTrackerProperties>

FeatureTracker node. Performs feature tracking and reidentification using motion estimation between 2 consecutive frames.

Public Functions

FeatureTracker(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId)
FeatureTracker(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId, std::unique_ptr<Properties> props)
void setWaitForConfigInput(bool wait)

Specify whether or not to wait until a configuration message arrives at the inputConfig input.

Parameters
  • wait: True to wait for configuration message, false otherwise.

bool getWaitForConfigInput() const

See

setWaitForConfigInput

Return

True if the node waits for an inputConfig message, false otherwise

void setHardwareResources(int numShaves, int numMemorySlices)

Specify allocated hardware resources for feature tracking. 2 shaves/memory slices are required for optical flow, 1 for corner detection only.

Parameters
  • numShaves: Number of shaves. Maximum 2.

  • numMemorySlices: Number of memory slices. Maximum 2.

Public Members

FeatureTrackerConfig initialConfig

Initial config to use for feature tracking.

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, false, 4, {{DatatypeEnum::FeatureTrackerConfig, false}}}

Input FeatureTrackerConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.

Input inputImage = {*this, "inputImage", Input::Type::SReceiver, false, 4, true, {{DatatypeEnum::ImgFrame, false}}}

Input message with frame data on which feature tracking is performed. Default queue is non-blocking with size 4.

Output outputFeatures = {*this, "outputFeatures", Output::Type::MSender, {{DatatypeEnum::TrackedFeatures, false}}}

Outputs TrackedFeatures message that carries tracked features results.

Output passthroughInputImage = {*this, "passthroughInputImage", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.

Public Static Attributes

static constexpr const char *NAME = "FeatureTracker"

Private Members

std::shared_ptr<RawFeatureTrackerConfig> rawConfig

Image cells

To have features spread across the whole image, the image is divided into cells which are then processed separately. Each cell has a target feature count = frame target features / number of cells. The number of cells can be configured in both the horizontal and the vertical direction. The default grid is 4 (horizontal) x 4 (vertical) cells, so the default number of target features per cell is 320 / (4 * 4) = 20. Note that if an already tracked point happens to have its new coordinate in a full cell, it will not be removed, therefore the number of features can exceed this limit.
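
For illustration, the per-cell budget that follows from the defaults quoted above, and a hedged sketch of changing it through initialConfig. The get()/set() access pattern and the cornerDetector field names (cellGridDimension, numTargetFeatures) are assumptions - verify them against the FeatureTrackerConfig reference:

# Per-cell target with the defaults: 320 features over a 4 x 4 grid
numTargetFeatures = 320
cellsHorizontal = cellsVertical = 4
targetPerCell = numTargetFeatures // (cellsHorizontal * cellsVertical)   # = 20

# Hedged sketch: adjusting the budget through the node's initial configuration
cfg = featureTracker.initialConfig.get()
cfg.cornerDetector.cellGridDimension = 8     # assumed field name: 8 x 8 cells instead of 4 x 4
cfg.cornerDetector.numTargetFeatures = 512   # assumed field name: raise the frame-level target
featureTracker.initialConfig.set(cfg)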

Initial Harris Threshold

This threshold controls the minimum strength a feature must have in order to be detected. Setting this value to 0 enables automatic thresholds, which adapt to different scenes. If the scene contains a lot of texture, this value needs to be increased to limit the number of detected points. Each cell has its own threshold.
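
A hedged sketch of setting this threshold, using the same get()/set() pattern as above; the cornerDetector.thresholds.initialValue field name is an assumption:

cfg = featureTracker.initialConfig.get()
# 0 keeps the automatic, scene-adaptive per-cell thresholds described above
cfg.cornerDetector.thresholds.initialValue = 0
# A higher fixed value (example number) limits detections in heavily textured scenes:
# cfg.cornerDetector.thresholds.initialValue = 20000
featureTracker.initialConfig.set(cfg)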

Entry conditions for new features

The entry conditions for new features are (see the sketch after this list):

  • features must not be too close to each other (minimum distance criterion - the default value is 50, measured as squared Euclidean distance in pixels),

  • Harris score of the feature is high enough,

  • there is enough room in the cell for the feature (target feature count is not achieved).
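
An illustrative sketch of these three checks. This is host-side pseudocode-style Python, not the on-device implementation; the 50-pixel value is the squared-Euclidean-distance default mentioned above:

def accepts_new_feature(candidate, cell_features, cell_target_count,
                        harris_threshold, min_squared_distance=50):
    """Illustration only: the three entry conditions for a new feature in one cell."""
    # 1. Not too close to any feature already in the cell
    #    (distance measured as squared Euclidean distance in pixels)
    for existing in cell_features:
        dx = candidate.x - existing.x
        dy = candidate.y - existing.y
        if dx * dx + dy * dy < min_squared_distance:
            return False
    # 2. Harris score is high enough
    if candidate.harris_score < harris_threshold:
        return False
    # 3. There is still room in the cell (target feature count not yet reached)
    return len(cell_features) < cell_target_count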

Harris Threshold for Tracked Features

Once a feature has been detected and we have started tracking it, we need to update its Harris score on each image. This threshold defines the point at which such a feature must be removed. The goal is to track points for as long as possible, so the entry conditions for new points are stricter than the conditions for already tracked points. This is why this value is usually smaller than the detection threshold.

Feature Maintenance

The algorithm has to decide which features will be removed and which will be kept in subsequent frames. Note that tracked features have priority over new features. It will remove the features which (see the sketch after this list):

  • have too large a tracking error (the feature wasn't tracked correctly),

  • have too small a Harris score (configurable threshold).
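
An illustrative sketch of these maintenance rules together with the lower Harris threshold for tracked features described above. Illustration only, not the on-device implementation; the threshold and error values are made-up examples:

# Made-up example values - only the relationship between them matters
DETECTION_HARRIS_THRESHOLD = 20000   # used when accepting new features
TRACKED_HARRIS_THRESHOLD = 5000      # lower, so features are kept as long as possible
MAX_TRACKING_ERROR = 1.0

def keep_tracked_feature(feature):
    """Decide whether an already tracked feature survives into the next frame."""
    if feature.tracking_error > MAX_TRACKING_ERROR:
        return False                 # too large a tracking error - wasn't tracked correctly
    if feature.harris_score < TRACKED_HARRIS_THRESHOLD:
        return False                 # score dropped below the (lower) tracked-feature threshold
    return True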

New position calculation

The positions of the features from the previous frame can be calculated on the current frame in two ways (the sketch after this list shows how to switch between them):

  1. Using the pyramidal Lucas-Kanade optical flow method.

  2. Using a dense motion estimation hardware block (Block matcher).
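
A hedged sketch of switching between the two methods at runtime by sending a new FeatureTrackerConfig to inputConfig. It reuses the configQueue from the Usage sketch above; the motionEstimator.type field and the enum names LUCAS_KANADE_OPTICAL_FLOW / HW_MOTION_ESTIMATION are assumptions - verify them against the FeatureTrackerConfig reference:

# Hedged sketch - field and enum names below are assumptions
cfg = featureTracker.initialConfig.get()

# 1. Pyramidal Lucas-Kanade optical flow (requires 2 shaves / 2 memory slices)
cfg.motionEstimator.type = dai.FeatureTrackerConfig.MotionEstimator.Type.LUCAS_KANADE_OPTICAL_FLOW

# 2. Dense motion estimation on the hardware block
# cfg.motionEstimator.type = dai.FeatureTrackerConfig.MotionEstimator.Type.HW_MOTION_ESTIMATION

newConfig = dai.FeatureTrackerConfig()
newConfig.set(cfg)
configQueue.send(newConfig)   # the inputConfig XLinkIn queue from the Usage sketch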

Got questions?

Head over to Discussion Forum for technical support or any other questions you might have.