Packets

Packets are synchronized collections of one or more DepthAI messages. They are used internally for visualization and also forwarded to the callback function if the user:

  1. Specified a callback for visualizing an output via OakCamera.visualize(..., callback=fn).

  2. Used a callback output via OakCamera.callback(..., callback=fn, enable_visualizer=True).

API Usage

  1. OakCamera.visualize(): In the example below, the SDK won’t show the frame to the user; instead, it will send the packet to the callback function. The SDK will draw detections (bounding boxes, labels) on packet.frame.

  2. OakCamera.callback(): This will also send a DetectionPacket to the callback function; the only difference is that the SDK won’t draw on the frame, so you can draw detections on the frame yourself.

Note

If you specify a callback function in OakCamera.visualize(), you need to trigger the drawing of detections yourself via the Visualizer.draw() method (see the sketch after the example below).

import cv2
from depthai_sdk import OakCamera
from depthai_sdk.classes import DetectionPacket

with OakCamera() as oak:
    color = oak.create_camera('color')
    nn = oak.create_nn('mobilenet-ssd', color)

    # Callback function that receives synced DetectionPacket objects
    def cb(packet: DetectionPacket):
        print(packet.img_detections)
        cv2.imshow(packet.name, packet.frame)

    # 1. Callback after visualization (SDK draws detections on packet.frame first):
    oak.visualize(nn.out.main, fps=True, callback=cb)

    # 2. Plain callback (SDK doesn't draw on the frame):
    oak.callback(nn.out.main, callback=cb, enable_visualizer=True)

    oak.start(blocking=True)
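
As mentioned in the note above, when a callback is passed to OakCamera.visualize(), the SDK prepares the visualizer objects but does not draw them. Below is a minimal sketch of triggering the drawing from the callback; it assumes that oak.visualize() returns the Visualizer instance bound to that output, which the callback can then use via Visualizer.draw().

import cv2
from depthai_sdk import OakCamera
from depthai_sdk.classes import DetectionPacket

with OakCamera() as oak:
    color = oak.create_camera('color')
    nn = oak.create_nn('mobilenet-ssd', color)

    def cb(packet: DetectionPacket):
        # Drawing is not triggered automatically when a callback is set,
        # so draw the prepared detections (boxes, labels) onto the frame here.
        visualizer.draw(packet.frame)
        cv2.imshow(packet.name, packet.frame)

    # Assumption: visualize() returns the Visualizer used for this output
    visualizer = oak.visualize(nn.out.main, fps=True, callback=cb)

    oak.start(blocking=True)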

Reference

FramePacket

class depthai_sdk.classes.packets.FramePacket(name, msg)

Contains only a dai.ImgFrame message and the corresponding cv2 frame, which is used by the visualization logic.

property frame

get_timestamp()
Return type: datetime.timedelta

get_sequence_num()
Return type: int

set_decode_codec(get_codec)
Parameters: get_codec (Callable)

decode()
Return type: Optional[numpy.ndarray]

get_size()
Return type: Tuple[int, int]
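
As a rough usage sketch (not part of the reference), the FramePacket accessors listed above can be read inside a callback. This assumes that a plain color-camera output (color.out.main) delivers FramePacket instances to OakCamera.callback():

import cv2
from depthai_sdk import OakCamera
from depthai_sdk.classes.packets import FramePacket

with OakCamera() as oak:
    color = oak.create_camera('color')

    def cb(packet: FramePacket):
        # Accessors documented above: sequence number, timestamp, frame size
        print(packet.get_sequence_num(), packet.get_timestamp(), packet.get_size())
        cv2.imshow(packet.name, packet.frame)

    oak.callback(color.out.main, callback=cb)
    oak.start(blocking=True)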

SpatialBbMappingPacket

class depthai_sdk.classes.packets.SpatialBbMappingPacket(name, msg, spatials, disp_scale_factor)

Output from Spatial Detection nodes - depth frame + bounding box mappings. Inherits FramePacket.

prepare_visualizer_objects(vis)

Prepare visualizer objects (boxes, lines, text, etc.), so the visualizer can draw them on the frame.

Parameters: vis (depthai_sdk.visualize.visualizer.Visualizer) – Visualizer object.
Return type: None

DetectionPacket

class depthai_sdk.classes.packets.DetectionPacket(name, msg, dai_msg, bbox)

Output from Detection Network nodes - image frame + image detections. Inherits FramePacket.

prepare_visualizer_objects(vis)

Prepare visualizer objects (boxes, lines, text, etc.), so the visualizer can draw them on the frame.

Parameters: vis (depthai_sdk.visualize.visualizer.Visualizer) – Visualizer object.
Return type: None
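
A hedged sketch of consuming a DetectionPacket without the built-in visualizer: the wrapped dai.ImgDetections message is exposed as packet.img_detections (as in the API Usage example above), and each dai.ImgDetection carries bounding-box coordinates normalized to [0, 1]:

import cv2
from depthai_sdk import OakCamera
from depthai_sdk.classes import DetectionPacket

with OakCamera() as oak:
    color = oak.create_camera('color')
    nn = oak.create_nn('mobilenet-ssd', color)

    def cb(packet: DetectionPacket):
        h, w = packet.frame.shape[:2]
        for det in packet.img_detections.detections:
            # Scale normalized coordinates to pixel coordinates
            pt1 = (int(det.xmin * w), int(det.ymin * h))
            pt2 = (int(det.xmax * w), int(det.ymax * h))
            cv2.rectangle(packet.frame, pt1, pt2, (0, 255, 0), 2)
        cv2.imshow(packet.name, packet.frame)

    # The SDK won't draw on the frame here, so we draw the boxes ourselves
    oak.callback(nn.out.main, callback=cb)
    oak.start(blocking=True)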

NNDataPacket

class depthai_sdk.classes.packets.NNDataPacket(name, nn_data)

Contains only a dai.NNData message.

get_timestamp()
Return type: datetime.timedelta

get_sequence_num()
Return type: int

DepthPacket

class depthai_sdk.classes.packets.DepthPacket(name, msg)

TrackerPacket

class depthai_sdk.classes.packets.TrackerPacket(name, msg, tracklets, bbox)

Output of Object Tracker node. Tracklets + Image frame. Inherits FramePacket.

prepare_visualizer_objects(visualizer)

Prepare visualizer objects (boxes, lines, text, etc.), so the visualizer can draw them on the frame.

Parameters: visualizer (depthai_sdk.visualize.visualizer.Visualizer) – Visualizer object.
Return type: None

TwoStagePacket

class depthai_sdk.classes.packets.TwoStagePacket(name, msg, img_detections, nn_data, labels, bbox)

Output of a two-stage NN pipeline: image frame, image detections, and multiple NNData results. Inherits DetectionPacket.

IMUPacket

class depthai_sdk.classes.packets.IMUPacket(name, packet, rotation=None)

get_imu_vals()

Returns IMU values as a tuple, in the format (accelerometer_values, gyroscope_values, quaternion, magnetometer_values).

Return type: Tuple[Sequence, Sequence, Sequence, Sequence]

get_timestamp()
Return type: datetime.timedelta

get_sequence_num()
Return type: int
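
A rough sketch of reading IMU packets, assuming an IMU component created via oak.create_imu() whose main output delivers IMUPacket instances:

from depthai_sdk import OakCamera
from depthai_sdk.classes.packets import IMUPacket

with OakCamera() as oak:
    imu = oak.create_imu()

    def cb(packet: IMUPacket):
        # Tuple order as documented above
        accel, gyro, quaternion, magnetometer = packet.get_imu_vals()
        print(packet.get_timestamp(), accel, gyro)

    oak.callback(imu.out.main, callback=cb)
    oak.start(blocking=True)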

Got questions?

Head over to the Discussion Forum for technical support or any other questions you might have.