ObjectTracker

The ObjectTracker node tracks detected objects (from ImgDetections messages) using a Kalman filter and the Hungarian algorithm.
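
The Kalman filter mentioned above maintains a motion estimate for each tracked object. The on-device implementation is not public, so the following is only an illustrative sketch of the predict/update cycle for one coordinate of a bounding box, assuming a constant-velocity model and hand-picked noise matrices:

```python
import numpy as np

# Constant-velocity Kalman filter for one coordinate of a box centre.
# Illustrative only; matrices and noise values are assumptions.
F = np.array([[1.0, 1.0],    # state transition: position += velocity
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])   # we only measure position
Q = np.eye(2) * 1e-2         # process noise covariance
R = np.array([[1.0]])        # measurement noise covariance

x = np.array([[0.0], [0.0]])  # state: [position, velocity]
P = np.eye(2)                 # state covariance

for z in [1.0, 2.1, 2.9, 4.2]:  # noisy position measurements
    # Predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step
    y = np.array([[z]]) - H @ x       # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(float(x[0, 0]))  # estimated position after the last update
```

The filter's prediction is what lets a tracker bridge frames where a detection is missing or noisy.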

How to place it

Python:

pipeline = dai.Pipeline()
objectTracker = pipeline.create(dai.node.ObjectTracker)

C++:

dai::Pipeline pipeline;
auto objectTracker = pipeline.create<dai::node::ObjectTracker>();

Inputs and Outputs

                    ┌───────────────────┐
inputDetectionFrame │                   │passthroughDetectionFrame
───────────────────►│-------------------├─────────────────────────►
                    │                   │                      out
                    │      Object       ├─────────────────────────►
inputTrackerFrame   │      Tracker      │  passthroughTrackerFrame
───────────────────►│-------------------├─────────────────────────►
inputDetections     │                   │    passthroughDetections
───────────────────►│-------------------├─────────────────────────►
                    └───────────────────┘

Message types

  • inputTrackerFrame, inputDetectionFrame: ImgFrame
  • inputDetections: ImgDetections
  • out: Tracklets
  • passthroughTrackerFrame, passthroughDetectionFrame: ImgFrame
  • passthroughDetections: ImgDetections

Zero term tracking

Zero-term tracking performs object association only; it does not conduct prediction and tracking based on previous tracking history. Object association means that detected objects from an external detector are matched with tracked objects that have been detected and tracked in previous frames.
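
The Hungarian algorithm referenced above solves this association step: it picks the detection-to-track assignment with the minimum total cost. A minimal sketch using IoU-based costs follows; the actual on-device cost function is not public and may, for example, include a color-histogram term for ZERO_TERM_COLOR_HISTOGRAM:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # Boxes given as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

tracked = [(0, 0, 10, 10), (20, 20, 30, 30)]    # boxes from previous frames
detected = [(21, 19, 31, 29), (1, 0, 11, 10)]   # boxes from the detector

# Cost = 1 - IoU; the Hungarian algorithm minimizes total cost
cost = np.array([[1 - iou(t, d) for d in detected] for t in tracked])
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows.tolist(), cols.tolist())))  # → [(0, 1), (1, 0)]
```

Here tracked object 0 is matched with detection 1 and tracked object 1 with detection 0, since those pairs overlap the most.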

Supported object tracker types:

  • ZERO_TERM_COLOR_HISTOGRAM: Uses position, shape, and input image information such as an RGB histogram to perform object tracking.

  • ZERO_TERM_IMAGELESS: Uses only the rectangular shape and position of the detected object for tracking; it does not use color information. It achieves higher throughput than ZERO_TERM_COLOR_HISTOGRAM, so consider the throughput/accuracy trade-off when choosing the tracker type.

Usage

Python:

pipeline = dai.Pipeline()
objectTracker = pipeline.create(dai.node.ObjectTracker)

objectTracker.setDetectionLabelsToTrack([15])  # Track only "person"
# Possible tracking types: ZERO_TERM_COLOR_HISTOGRAM, ZERO_TERM_IMAGELESS
objectTracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)
# Assign the smallest available ID to a newly tracked object; possible options: SMALLEST_ID, UNIQUE_ID
objectTracker.setTrackerIdAssigmentPolicy(dai.TrackerIdAssigmentPolicy.SMALLEST_ID)

# The ObjectTracker node has to be used in combination with a detection network
# and an image frame source - a mono/color camera or an XLinkIn node

C++:

dai::Pipeline pipeline;
auto objectTracker = pipeline.create<dai::node::ObjectTracker>();

objectTracker->setDetectionLabelsToTrack({15});  // Track only "person"
// Possible tracking types: ZERO_TERM_COLOR_HISTOGRAM, ZERO_TERM_IMAGELESS
objectTracker->setTrackerType(dai::TrackerType::ZERO_TERM_COLOR_HISTOGRAM);
// Assign the smallest available ID to a newly tracked object; possible options: SMALLEST_ID, UNIQUE_ID
objectTracker->setTrackerIdAssigmentPolicy(dai::TrackerIdAssigmentPolicy::SMALLEST_ID);

// The ObjectTracker node has to be used in combination with a detection network
// and an image frame source - a mono/color camera or an XLinkIn node

Reference

class depthai.node.ObjectTracker

ObjectTracker node. Performs object tracking using a Kalman filter and the Hungarian algorithm.

class Connection

Connection between an Input and Output

class Id

Node identifier. Unique for every node on a single Pipeline

Properties

alias of depthai.ObjectTrackerProperties

getAssetManager(*args, **kwargs)

Overloaded function.

  1. getAssetManager(self: depthai.Node) -> depthai.AssetManager

Get node AssetManager as a const reference

  2. getAssetManager(self: depthai.Node) -> depthai.AssetManager

Get node AssetManager as a const reference

getInputRefs(*args, **kwargs)

Overloaded function.

  1. getInputRefs(self: depthai.Node) -> List[depthai.Node.Input]

Retrieves reference to node inputs

  2. getInputRefs(self: depthai.Node) -> List[depthai.Node.Input]

Retrieves reference to node inputs

getInputs(self: depthai.Node) → List[depthai.Node.Input]

Retrieves all node inputs

getName(self: depthai.Node) → str

Retrieves the node's name

getOutputRefs(*args, **kwargs)

Overloaded function.

  1. getOutputRefs(self: depthai.Node) -> List[depthai.Node.Output]

Retrieves reference to node outputs

  2. getOutputRefs(self: depthai.Node) -> List[depthai.Node.Output]

Retrieves reference to node outputs

getOutputs(self: depthai.Node) → List[depthai.Node.Output]

Retrieves all node outputs

getParentPipeline(*args, **kwargs)

Overloaded function.

  1. getParentPipeline(self: depthai.Node) -> depthai.Pipeline

  2. getParentPipeline(self: depthai.Node) -> depthai.Pipeline

property id

Id of node

property inputDetectionFrame

Input ImgFrame message on which object detection was performed. Default queue is non-blocking with size 4.

property inputDetections

Input message with image detections from a neural network. Default queue is non-blocking with size 4.

property inputTrackerFrame

Input ImgFrame message on which tracking will be performed. RGBp, BGRp, NV12, YUV420p types are supported. Default queue is non-blocking with size 4.

property out

Outputs Tracklets message that carries object tracking results.

property passthroughDetectionFrame

Passthrough ImgFrame message on which object detection was performed. Suitable for when input queue is set to non-blocking behavior.

property passthroughDetections

Passthrough image detections message from the neural network output. Suitable for when input queue is set to non-blocking behavior.

property passthroughTrackerFrame

Passthrough ImgFrame message on which tracking was performed. Suitable for when input queue is set to non-blocking behavior.

setDetectionLabelsToTrack(self: depthai.node.ObjectTracker, labels: List[int]) → None

Specify detection labels to track.

Parameter labels:

Detection labels to track. By default, every label from the image detection network output is tracked.

setMaxObjectsToTrack(self: depthai.node.ObjectTracker, maxObjectsToTrack: int) → None

Specify the maximum number of objects to track.

Parameter maxObjectsToTrack:

Maximum number of objects to track; at most 60.

setTrackerIdAssigmentPolicy(self: depthai.node.ObjectTracker, type: depthai.TrackerIdAssigmentPolicy) → None

Specify the tracker ID assignment policy.

Parameter type:

Tracker ID assignment policy.

setTrackerThreshold(self: depthai.node.ObjectTracker, threshold: float) → None

Specify the tracker threshold.

Parameter threshold:

Detected objects with confidence above this threshold are tracked. Default 0, meaning all image detections are tracked.

setTrackerType(self: depthai.node.ObjectTracker, type: depthai.TrackerType) → None

Specify tracker type algorithm.

Parameter type:

Tracker type.

class dai::node::ObjectTracker : public dai::Node

ObjectTracker node. Performs object tracking using a Kalman filter and the Hungarian algorithm.

Public Types

using Properties = dai::ObjectTrackerProperties

Public Functions

std::string getName() const override

Retrieves the node's name.

ObjectTracker(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId)
void setTrackerThreshold(float threshold)

Specify tracker threshold.

Parameters
  • threshold: Detected objects with confidence above this threshold are tracked. Default 0, meaning all image detections are tracked.

void setMaxObjectsToTrack(std::int32_t maxObjectsToTrack)

Specify the maximum number of objects to track.

Parameters
  • maxObjectsToTrack: Maximum number of objects to track; at most 60.

void setDetectionLabelsToTrack(std::vector<std::uint32_t> labels)

Specify detection labels to track.

Parameters
  • labels: Detection labels to track. By default, every label from the image detection network output is tracked.

void setTrackerType(TrackerType type)

Specify tracker type algorithm.

Parameters
  • type: Tracker type.

void setTrackerIdAssigmentPolicy(TrackerIdAssigmentPolicy type)

Specify the tracker ID assignment policy.

Parameters
  • type: Tracker ID assignment policy.

Public Members

Input inputTrackerFrame = {*this, "inputTrackerFrame", Input::Type::SReceiver, false, 4, {{DatatypeEnum::ImgFrame, false}}}

Input ImgFrame message on which tracking will be performed. RGBp, BGRp, NV12, YUV420p types are supported. Default queue is non-blocking with size 4.

Input inputDetectionFrame = {*this, "inputDetectionFrame", Input::Type::SReceiver, false, 4, {{DatatypeEnum::ImgFrame, false}}}

Input ImgFrame message on which object detection was performed. Default queue is non-blocking with size 4.

Input inputDetections = {*this, "inputDetections", Input::Type::SReceiver, false, 4, {{DatatypeEnum::ImgDetections, true}}}

Input message with image detections from a neural network. Default queue is non-blocking with size 4.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::Tracklets, false}}}

Outputs Tracklets message that carries object tracking results.

Output passthroughTrackerFrame = {*this, "passthroughTrackerFrame", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough ImgFrame message on which tracking was performed. Suitable for when input queue is set to non-blocking behavior.

Output passthroughDetectionFrame = {*this, "passthroughDetectionFrame", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough ImgFrame message on which object detection was performed. Suitable for when input queue is set to non-blocking behavior.

Output passthroughDetections = {*this, "passthroughDetections", Output::Type::MSender, {{DatatypeEnum::ImgDetections, true}}}

Passthrough image detections message from the neural network output. Suitable for when input queue is set to non-blocking behavior.

Private Functions

nlohmann::json getProperties() override
std::shared_ptr<Node> clone() override

Private Members

Properties properties
