ON THIS PAGE

  • MobileNetSpatialDetectionNetwork
  • How to place it
  • Inputs and Outputs
  • Configuring Spatial Detection
  • Detection
  • Alignment
  • Scaling of BBOX
  • Calculation of spatials
  • Averaging methods
  • Common mistakes
  • Usage
  • Examples of functionality
  • Spatial coordinate system
  • Reference

MobileNetSpatialDetectionNetwork

Spatial detection for the MobileNet NN. It is similar to a combination of the MobileNetDetectionNetwork and SpatialLocationCalculator.

How to place it

Python
pipeline = dai.Pipeline()
mobilenetSpatial = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)

Inputs and Outputs

                 ┌───────────────────┐
  input          │                   │       passthrough
  ──────────────►│-------------------├─────────────────►
                 │     MobileNet     │               out
                 │     Spatial       ├─────────────────►
                 │     Detection     │boundingBoxMapping
                 │     Network       ├─────────────────►
  inputDepth     │                   │  passthroughDepth
  ──────────────►│-------------------├─────────────────►
                 └───────────────────┘
Message types
  • input: ImgFrame
  • inputDepth: ImgFrame
  • passthrough: ImgFrame
  • passthroughDepth: ImgFrame
  • out: SpatialImgDetections
  • boundingBoxMapping: SpatialLocationCalculatorConfig
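
As a sketch of how these inputs and outputs are typically wired (the node names, sockets and the "detections" stream name are illustrative, not mandated by the library), assuming a color camera feeds the NN and a StereoDepth node provides depth:

Python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera producing the NN input frames
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(300, 300)  # MobileNet-SSD input size
camRgb.setInterleaved(False)

# Mono cameras feeding the stereo depth node
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
stereo = pipeline.create(dai.node.StereoDepth)
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

mobilenetSpatial = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)

# Feed the node: color frames to `input`, depth frames to `inputDepth`
camRgb.preview.link(mobilenetSpatial.input)
stereo.depth.link(mobilenetSpatial.inputDepth)

# Expose the detection output to the host
xoutNN = pipeline.create(dai.node.XLinkOut)
xoutNN.setStreamName("detections")
mobilenetSpatial.out.link(xoutNN.input)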

Configuring Spatial Detection

The pipeline inside the SpatialDetectionNetwork node is described below. The Spatial Detection node is essentially just an abstraction over a Detection Network (YoloDetectionNetwork or MobileNetDetectionNetwork) combined with the SpatialLocationCalculator. It works by linking the bounding box of each detected object to the spatial location calculator. The process goes as follows:

Detection

The Detection Network is responsible for detecting objects in the input frame. It outputs a list of detected objects, each represented by a bounding box, a label, and a confidence score.
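
On the host, the resulting SpatialImgDetections message can be read like this (continuing the wiring sketch above, where the out output is streamed as "detections"):

Python
with dai.Device(pipeline) as device:
    qDet = device.getOutputQueue(name="detections", maxSize=4, blocking=False)
    while True:
        inDet = qDet.get()  # SpatialImgDetections message
        for det in inDet.detections:
            # Normalized bounding box corners, class label index and confidence
            print(det.label, det.confidence, det.xmin, det.ymin, det.xmax, det.ymax)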

Alignment

The depth map is aligned with the input frame. This is necessary because the DetectionNetwork operates on the input frame, while the SpatialLocationCalculator operates on the depth map.
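
In practice this alignment is configured on the StereoDepth node; a minimal sketch, assuming the color camera (socket CAM_A on most OAK devices) provides the NN input:

Python
# Align the depth map to the color camera so depth pixels correspond to the NN input frame
stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)
# Optionally match the depth output size to the mono resolution used elsewhere in the pipeline
stereo.setOutputSize(monoLeft.getResolutionWidth(), monoLeft.getResolutionHeight())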

Scaling of BBOX

The bounding box from the network is sent to SpatialLocationCalculator and is scaled according to BoundingBoxScaleFactor. This is done to ensure it includes the entire object. The bounding box is then used along with depth to calculate the spatial coordinates of the object.
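
The scaling itself is simple arithmetic around the bounding-box center; a rough illustration (not the library's exact on-device code), using a scale factor of 0.5:

Python
def scale_bbox(xmin, ymin, xmax, ymax, scale=0.5):
    # Shrink (scale < 1) or grow (scale > 1) the ROI around its center
    cx, cy = (xmin + xmax) / 2, (ymin + ymax) / 2
    w, h = (xmax - xmin) * scale, (ymax - ymin) * scale
    return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

The scaled ROI that was actually used on-device can be inspected through the boundingBoxMapping output (a SpatialLocationCalculatorConfig message), which is handy for overlaying the ROI on the depth frame.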

Calculation of spatials

  • X and Y coordinates are taken from the bounding box center. They are calculated based on the offset from the center of the frame and the depth at that point.
  • For depth (Z), each pixel inside the scaled bounding box (ROI) is taken into account. This gives us a set of depth values, which are then averaged to get the final depth value (see the simplified sketch after this list).
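
A simplified, host-side sketch of the idea (a pinhole-camera approximation, not the exact on-device implementation; depth_roi, u, v and the intrinsics fx, fy, cx, cy are illustrative inputs):

Python
import numpy as np

def spatials_from_roi(depth_roi, u, v, fx, fy, cx, cy):
    # depth_roi: depth values (mm) inside the scaled ROI; (u, v): bounding-box center in pixels
    # Z: aggregate the valid depth values inside the ROI (median shown here)
    z = float(np.median(depth_roi[depth_roi > 0]))
    # X, Y: back-project the pixel offset from the principal point using Z
    x = z * (u - cx) / fx
    y = z * (v - cy) / fy
    return x, y, z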

Averaging methods

  • Average/mean: the average of the depth values inside the ROI is used.
  • Min: the minimum depth value inside the ROI is used.
  • Max: the maximum depth value inside the ROI is used.
  • Mode: the most frequent depth value inside the ROI is used.
  • Median: the median depth value inside the ROI is used.
The default method is Median.
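
The method can be changed on the node; a brief sketch (setSpatialCalculationAlgorithm is available on the spatial detection nodes in recent DepthAI releases):

Python
# Use the minimum depth value inside the ROI instead of the default median
mobilenetSpatial.setSpatialCalculationAlgorithm(dai.SpatialLocationCalculatorAlgorithm.MIN)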

Common mistakes

Most mistakes stem from incorrect bounding box overlap. The scaled bounding box may include parts of the background, which can skew the depth calculation.
  • Thin objects (like a pole) will likely have inaccurate spatials, since only a small portion of the bounding box actually lies on the detected object. In such cases, it is best to use a smaller BoundingBoxScaleFactor if possible.
  • Objects with holes (hoops, rings, etc.). To get the correct depth, the bounding box should include the entire object. Instead of the median depth, use the MIN depth method to exclude the background from the calculation. Alternatively, in a static environment, a depth threshold can be set to ignore the background (see the sketch after this list).
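
For the static-background case, the depth thresholds can do the clipping; for example, if everything of interest is known to lie between 0.2 m and 1 m from the camera (the values here are illustrative):

Python
# Depth values outside 200-1000 mm are treated as invalid, so the background
# seen through a hollow object no longer skews the averaged depth
mobilenetSpatial.setDepthLowerThreshold(200)
mobilenetSpatial.setDepthUpperThreshold(1000)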

Usage

Python
pipeline = dai.Pipeline()
mobilenetSpatial = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)

mobilenetSpatial.setBlobPath(nnBlobPath)
# Will ignore all detections whose confidence is below 50%
mobilenetSpatial.setConfidenceThreshold(0.5)
mobilenetSpatial.input.setBlocking(False)
# How big the ROI will be (smaller value can provide a more stable reading)
mobilenetSpatial.setBoundingBoxScaleFactor(0.5)
# Min/Max threshold. Values out of range will be set to 0 (invalid)
mobilenetSpatial.setDepthLowerThreshold(100)
mobilenetSpatial.setDepthUpperThreshold(5000)

# Link depth from the StereoDepth node
stereo.depth.link(mobilenetSpatial.inputDepth)

Examples of functionality

Spatial coordinate system

The OAK camera uses a left-handed (Cartesian) coordinate system for all spatial coordinates.
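
Each detection carries its position in this coordinate system, expressed in millimeters relative to the camera; continuing the detection loop above:

Python
for det in inDet.detections:
    coords = det.spatialCoordinates  # millimeters, relative to the camera
    print(f"X: {coords.x:.0f} mm, Y: {coords.y:.0f} mm, Z: {coords.z:.0f} mm")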

Reference

class depthai.node.MobileNetSpatialDetectionNetwork(depthai.node.SpatialDetectionNetwork)
