# ObjectTracker

The ObjectTracker node tracks detected objects from
[ImgDetections](https://docs.luxonis.com/software/depthai-components/messages/img_detections.md) using a Kalman filter and the
Hungarian algorithm.

## How to place it

#### Python

```python
pipeline = dai.Pipeline()
objectTracker = pipeline.create(dai.node.ObjectTracker)
```

#### C++

```cpp
dai::Pipeline pipeline;
auto objectTracker = pipeline.create<dai::node::ObjectTracker>();
```

## Inputs and Outputs

The node has four inputs (`inputTrackerFrame`, `inputDetectionFrame`, `inputDetections`, and `inputConfig`) and four outputs:
`out`, which carries Tracklets, plus passthrough variants of the frame and detection inputs. See the reference below for details.

## Zero term tracking

Zero-term tracking performs only object association; it does not predict or track based on previous tracking history. Object
association means that detected objects from an external detector are matched with tracked objects that have been detected and
tracked in previous frames.
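
As an illustration of the association step, the sketch below matches new detections to existing tracklets by minimizing the total `1 - IoU` cost, which for small object counts yields the same result as the Hungarian algorithm. This is a conceptual example, not the firmware implementation; the `(x, y, w, h)` box format is an assumption.

```python
from itertools import permutations

def iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def associate(tracked, detected):
    """Exhaustive minimal-cost assignment (same result as the Hungarian
    algorithm for small N). Assumes len(tracked) <= len(detected) for
    brevity. Returns a list of (tracked_idx, detected_idx) pairs."""
    n = min(len(tracked), len(detected))
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(detected)), n):
        cost = sum(1 - iou(tracked[i], detected[j])
                   for i, j in zip(range(n), perm))
        if cost < best_cost:
            best, best_cost = list(zip(range(n), perm)), cost
    return best

tracked = [(0, 0, 10, 10), (50, 50, 10, 10)]
detected = [(52, 51, 10, 10), (1, 0, 10, 10)]
print(associate(tracked, detected))  # → [(0, 1), (1, 0)]
```

Each previously tracked box is paired with the nearest (highest-IoU) new detection, regardless of the order the detector emits them in.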

## Short term tracking

Short-term tracking makes it possible to track objects between frames, reducing the need to run object detection on every frame.
This works well with NN models that can't achieve 30 FPS (e.g.
[YoloV5](https://github.com/luxonis/oak-examples/tree/master/gen2-yolo)); the tracker can provide tracklets on frames where no
inference ran, so the whole system can run at 30 FPS.
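
The idea behind imageless extrapolation can be sketched with a constant-velocity model; this is a conceptual illustration, not the firmware implementation, and the `(x, y, w, h)` box format is an assumption:

```python
def extrapolate(prev, curr):
    """Given boxes (x, y, w, h) from the last two detected frames,
    predict the box on the next (skipped) frame by assuming each
    component keeps changing at the same rate."""
    return tuple(c + (c - p) for p, c in zip(prev, curr))

# The object moved +5 px in x between the last two detections,
# so the predicted box continues at the same velocity.
print(extrapolate((10, 20, 30, 30), (15, 20, 30, 30)))  # → (20, 20, 30, 30)
```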

## Supported object tracker types

 * SHORT_TERM_KCF: Kernelized Correlation Filter tracking. KCF utilizes properties of circulant matrices to enhance processing
   speed. Paper [here](https://www.robots.ox.ac.uk/~joao/publications/henriques_tpami2015.pdf).
 * SHORT_TERM_IMAGELESS: Tracks objects on frames where object detection was skipped by extrapolating the object's trajectory
   from previous detections.
 * ZERO_TERM_COLOR_HISTOGRAM: Utilizes position, shape, and input image information such as an RGB histogram to perform object
   tracking.
 * ZERO_TERM_IMAGELESS: Utilizes only the rectangular shape and position of the detected object; it does not use color
   information. It achieves higher throughput than ZERO_TERM_COLOR_HISTOGRAM. Users need to consider the trade-off between
   throughput and accuracy when choosing a tracker type.

A comparison of these object tracker types with more information can be found
[here](https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Object-tracking#object-tracking-types).

## Maximum number of tracked objects

The ObjectTracker node can track up to 60 objects at once; at the moment, the firmware crashes if there are more than 60 objects
to track.
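
To stay within this limit, the cap can be set explicitly with `setMaxObjectsToTrack` (documented in the reference below). A minimal sketch:

```python
import depthai as dai

pipeline = dai.Pipeline()
objectTracker = pipeline.create(dai.node.ObjectTracker)
# Cap the number of simultaneously tracked objects; per the reference
# below, the maximum is 60 for SHORT_TERM_KCF and 1000 otherwise.
objectTracker.setMaxObjectsToTrack(60)
```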

## Usage

#### Python

```python
pipeline = dai.Pipeline()
objectTracker = pipeline.create(dai.node.ObjectTracker)

objectTracker.setDetectionLabelsToTrack([15])  # Track only person
# Possible tracking types: ZERO_TERM_COLOR_HISTOGRAM, ZERO_TERM_IMAGELESS, SHORT_TERM_IMAGELESS, SHORT_TERM_KCF
objectTracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)
# Take the smallest ID when new object is tracked, possible options: SMALLEST_ID, UNIQUE_ID
objectTracker.setTrackerIdAssignmentPolicy(dai.TrackerIdAssignmentPolicy.SMALLEST_ID)

# You have to use Object tracker in combination with detection network
# and an image frame source - mono/color camera or xlinkIn node
```

#### C++

```cpp
dai::Pipeline pipeline;
auto objectTracker = pipeline.create<dai::node::ObjectTracker>();

objectTracker->setDetectionLabelsToTrack({15});  // Track only person
// Possible tracking types: ZERO_TERM_COLOR_HISTOGRAM, ZERO_TERM_IMAGELESS, SHORT_TERM_IMAGELESS, SHORT_TERM_KCF
objectTracker->setTrackerType(dai::TrackerType::ZERO_TERM_COLOR_HISTOGRAM);
// Take the smallest ID when new object is tracked, possible options: SMALLEST_ID, UNIQUE_ID
objectTracker->setTrackerIdAssignmentPolicy(dai::TrackerIdAssignmentPolicy::SMALLEST_ID);

// You have to use Object tracker in combination with detection network
// and an image frame source - mono/color camera or xlinkIn node
```
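
As a sketch of a complete pipeline, the example below wires a color camera and a detection network to the tracker and reads the resulting Tracklets on the host. The detector node choice (`MobileNetDetectionNetwork`), the blob path, and the stream name are illustrative assumptions, and running it requires a connected OAK device:

```python
import depthai as dai

pipeline = dai.Pipeline()

# Image source and detector (blob path is a placeholder)
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
detectionNetwork = pipeline.create(dai.node.MobileNetDetectionNetwork)
detectionNetwork.setBlobPath("mobilenet-ssd.blob")
objectTracker = pipeline.create(dai.node.ObjectTracker)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("tracklets")

# Wiring: frames to the detector, then frames + detections to the tracker
cam.preview.link(detectionNetwork.input)
detectionNetwork.passthrough.link(objectTracker.inputTrackerFrame)
detectionNetwork.passthrough.link(objectTracker.inputDetectionFrame)
detectionNetwork.out.link(objectTracker.inputDetections)
objectTracker.out.link(xout.input)

# On the host, read Tracklets messages from the output queue
with dai.Device(pipeline) as device:
    q = device.getOutputQueue("tracklets")
    while True:
        for t in q.get().tracklets:
            print(t.id, t.status, t.roi)
```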

## Examples of functionality

 * [Object tracker on RGB](https://docs.luxonis.com/software/depthai/examples/object_tracker.md)
 * [Spatial object tracker on RGB](https://docs.luxonis.com/software/depthai/examples/spatial_object_tracker.md)
 * [Object tracker on video](https://docs.luxonis.com/software/depthai/examples/object_tracker_video.md)

## Reference

### depthai.node.ObjectTracker(depthai.Node)

Kind: Class

ObjectTracker node. Performs object tracking using a Kalman filter and the
Hungarian algorithm.

#### setDetectionLabelsToTrack(self, labels: collections.abc.Sequence[typing.SupportsInt])

Kind: Method

Specify detection labels to track.

Parameter ``labels``:
Detection labels to track. By default, every label from the image detection
network output is tracked.

#### setMaxObjectsToTrack(self, maxObjectsToTrack: typing.SupportsInt)

Kind: Method

Specify the maximum number of objects to track.

Parameter ``maxObjectsToTrack``:
Maximum number of objects to track. Maximum 60 in case of SHORT_TERM_KCF,
otherwise 1000.

#### setOcclusionRatioThreshold(self, occlusionRatioThreshold: typing.SupportsFloat)

Kind: Method

Specify occlusion ratio threshold.

Parameter ``occlusionRatioThreshold``:
Occlusion ratio threshold. Used to filter out overlapping tracklets. Default
0.4.

#### setTrackerIdAssignmentPolicy(self, type: depthai.TrackerIdAssignmentPolicy)

Kind: Method

Specify tracker ID assignment policy.

Parameter ``type``:
Tracker ID assignment policy.

#### setTrackerThreshold(self, threshold: typing.SupportsFloat)

Kind: Method

Specify tracker threshold.

Parameter ``threshold``:
Detected objects above this threshold will be tracked. Default 0: all
image detections are tracked.

#### setTrackerType(self, type: depthai.TrackerType)

Kind: Method

Specify tracker type algorithm.

Parameter ``type``:
Tracker type.

#### setTrackingPerClass(self, trackingPerClass: bool)

Kind: Method

Whether the tracker should take the class label into consideration for tracking.

#### setTrackletBirthThreshold(self, threshold: typing.SupportsInt)

Kind: Method

Specify tracklet birth threshold.

Parameter ``threshold``:
Tracklet birth threshold. Minimum consecutive tracked frames required to
consider a tracklet as a new instance. Default 3.

#### setTrackletMaxLifespan(self, lifespan: typing.SupportsInt)

Kind: Method

Specify tracklet lifespan.

Parameter ``lifespan``:
Tracklet lifespan in number of frames. Number of frames after which a LOST
tracklet is removed. Default 120.

#### inputConfig

Kind: Property

Input ObjectTrackerConfig message with ability to modify parameters at runtime.
Default queue is non-blocking with size 4.

#### inputDetectionFrame

Kind: Property

Input ImgFrame message on which object detection was performed. Default queue is
non-blocking with size 4.

#### inputDetections

Kind: Property

Input message with image detections from a neural network. Default queue is
non-blocking with size 4.

#### inputTrackerFrame

Kind: Property

Input ImgFrame message on which tracking will be performed. RGBp, BGRp, NV12,
YUV420p types are supported. Default queue is non-blocking with size 4.

#### out

Kind: Property

Outputs Tracklets message that carries object tracking results.

#### passthroughDetectionFrame

Kind: Property

Passthrough ImgFrame message on which object detection was performed. Suitable
when the input queue is set to non-blocking behavior.

#### passthroughDetections

Kind: Property

Passthrough image detections message from the neural network output. Suitable
when the input queue is set to non-blocking behavior.

#### passthroughTrackerFrame

Kind: Property

Passthrough ImgFrame message on which tracking was performed. Suitable when the
input queue is set to non-blocking behavior.

### Need assistance?

Head over to [Discussion Forum](https://discuss.luxonis.com/) for technical support or any other questions you might have.
