DetectionNetwork
The DetectionNetwork node extends NeuralNetwork by also performing the on-camera parsing (decoding) step: it parses the raw NN output (byte array) into bounding boxes, labels, and confidence values, which are formatted into an ImgDetections message. DetectionNetwork currently supports the YOLO and SSD NN output formats, and replaces YoloDetectionNetwork and MobileNetDetectionNetwork from DepthAI API v2.
How to place it
Python
pipeline = dai.Pipeline()
detection = pipeline.create(dai.node.DetectionNetwork)
Inputs and Outputs
outNetwork
outputs the raw, unparsed NN output (byte array), which can be used for custom parsing on the host.
Usage
Python
pipeline = dai.Pipeline()
camera = pipeline.create(dai.node.Camera).build()
detection = pipeline.create(dai.node.DetectionNetwork).build(camera, dai.NNModelDescription("yolov6-nano"))
detection.setConfidenceThreshold(0.5)
detection.input.setBlocking(False)
Examples of functionality
RGB + Detection Network
Reference
class
dai::node::DetectionNetwork
variable
Subnode< NeuralNetwork > neuralNetwork
variable
Subnode< DetectionParser > detectionParser
variable
Output & out
Outputs ImgDetections message that carries parsed detection results. Overrides NeuralNetwork 'out' with ImgDetections output message type.
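Detection coordinates in ImgDetections are normalized to the [0, 1] range relative to the NN input frame, so they are typically mapped back to pixel coordinates before drawing. A minimal host-side sketch of that step (the `Det` tuple below is a hypothetical stand-in for the real ImgDetection fields, not the actual message type):

```python
from typing import NamedTuple

class Det(NamedTuple):
    # Stand-in for an ImgDetection record: normalized [0, 1] corner coordinates.
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def to_pixels(det: Det, width: int, height: int) -> tuple[int, int, int, int]:
    """Map normalized bbox corners to integer pixel coordinates,
    clamping to the frame so rounding never draws out of bounds."""
    clamp = lambda v, hi: max(0, min(int(v), hi - 1))
    return (clamp(det.xmin * width, width),
            clamp(det.ymin * height, height),
            clamp(det.xmax * width, width),
            clamp(det.ymax * height, height))

print(to_pixels(Det(0.25, 0.5, 0.75, 1.0), 640, 400))  # (160, 200, 480, 399)
```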
variable
Output & outNetwork
Outputs unparsed inference results.
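When the parsed ImgDetections output is not enough, outNetwork exposes the raw tensor bytes for custom host-side decoding. As an illustrative sketch only (the flat little-endian FP16 layout shown is a common NN output format, but each model documents its own shape and dtype), such a byte array can be unpacked with the standard library:

```python
import struct

def unpack_fp16(raw: bytes) -> list[float]:
    """Unpack a flat little-endian FP16 tensor into Python floats.
    The layout is an assumption for illustration; check your model's
    actual output shape and dtype before parsing."""
    count = len(raw) // 2          # 2 bytes per half-precision value
    return list(struct.unpack(f"<{count}e", raw))

# Round-trip a few exactly representable values to show the decoding step.
raw = struct.pack("<3e", 0.5, 1.0, 0.25)
print(unpack_fp16(raw))           # [0.5, 1.0, 0.25]
```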
variable
Input & input
Input message with data to be inferred upon. Default queue is blocking with size 5.
variable
Output & passthrough
Passthrough message on which the inference was performed. Suitable for when the input queue is set to non-blocking behavior.
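Because passthrough re-emits the exact frame an inference ran on, a common host-side pattern is to match each detections message to its frame via the sequence number the two messages share. A generic sketch of that pairing logic (plain dicts stand in for the real message objects):

```python
def pair_by_sequence(frames, detections):
    """Match each detections message to the frame its inference ran on,
    using the sequence number both messages share. Frames without a
    matching detections message (e.g. dropped when the input queue is
    non-blocking) are simply skipped."""
    by_seq = {f["seq"]: f for f in frames}
    return [(by_seq[d["seq"]], d) for d in detections if d["seq"] in by_seq]

frames = [{"seq": 1, "img": "f1"}, {"seq": 2, "img": "f2"}, {"seq": 4, "img": "f4"}]
dets = [{"seq": 2, "boxes": 3}, {"seq": 4, "boxes": 1}]
for frame, det in pair_by_sequence(frames, dets):
    print(frame["img"], det["boxes"])
```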
function
DetectionNetwork(const std::shared_ptr< Device > & device)
function
std::shared_ptr< DetectionNetwork > build(Node::Output & input, const NNArchive & nnArchive)
function
std::shared_ptr< DetectionNetwork > build(const std::shared_ptr< Camera > & input, NNModelDescription modelDesc, std::optional< float > fps)
function
std::shared_ptr< DetectionNetwork > build(const std::shared_ptr< Camera > & input, const NNArchive & nnArchive, std::optional< float > fps)
function
std::shared_ptr< DetectionNetwork > build(const std::shared_ptr< ReplayVideo > & input, NNModelDescription modelDesc, std::optional< float > fps)
function
std::shared_ptr< DetectionNetwork > build(const std::shared_ptr< ReplayVideo > & input, const NNArchive & nnArchive, std::optional< float > fps)
function
void setNNArchive(const NNArchive & nnArchive)
function
void setNNArchive(const NNArchive & nnArchive, int numShaves)
function
void setFromModelZoo(NNModelDescription description, bool useCached)
function
void setFromModelZoo(NNModelDescription description, int numShaves, bool useCached)
function
void setBlobPath(const dai::Path & path)
Load network blob into assets and use once pipeline is started.
Throws
- Error: if file doesn't exist or isn't a valid network blob.
Parameters
- path: Path to network blob
function
void setBlob(OpenVINO::Blob blob)
Load network blob into assets and use once pipeline is started.
Parameters
- blob: Network blob
function
void setBlob(const dai::Path & path)
Same functionality as setBlobPath(). Load network blob into assets and use once pipeline is started.
Throws
- Error: if file doesn't exist or isn't a valid network blob.
Parameters
- path: Path to network blob
function
void setModelPath(const dai::Path & modelPath)
Load network model into assets.
Parameters
- modelPath: Path to the model file.
function
void setNumPoolFrames(int numFrames)
Specifies how many frames will be available in the pool
Parameters
- numFrames: How many frames will pool have
function
void setNumInferenceThreads(int numThreads)
How many threads should the node use to run the network.
Parameters
- numThreads: Number of threads to dedicate to this node
function
void setNumNCEPerInferenceThread(int numNCEPerThread)
How many Neural Compute Engines should a single thread use for inference
Parameters
- numNCEPerThread: Number of NCE per thread
function
void setNumShavesPerInferenceThread(int numShavesPerThread)
How many Shaves should a single thread use for inference
Parameters
- numShavesPerThread: Number of shaves per thread
function
void setBackend(std::string backend)
Specifies backend to use
Parameters
- backend: String specifying backend to use
function
void setBackendProperties(std::map< std::string, std::string > properties)
Set backend properties
Parameters
- backendProperties: backend properties map
function
int getNumInferenceThreads()
How many inference threads will be used to run the network
Returns
Number of threads, 0, 1 or 2. Zero means AUTO
function
void setConfidenceThreshold(float thresh)
Specifies confidence threshold at which to filter the rest of the detections.
Parameters
- thresh: Detection confidence must be greater than specified threshold to be added to the list
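The threshold acts as a simple filter on the parser output: only detections whose confidence is strictly greater than the threshold are kept in the ImgDetections list. In sketch form (the dict records are hypothetical, not the real message type):

```python
def filter_by_confidence(detections, thresh):
    """Keep only detections strictly above the confidence threshold,
    mirroring the 'greater than' rule described above."""
    return [d for d in detections if d["confidence"] > thresh]

dets = [{"label": 0, "confidence": 0.91},
        {"label": 1, "confidence": 0.50},
        {"label": 2, "confidence": 0.32}]
# With thresh=0.5, the 0.50 detection is dropped (not strictly greater).
print(filter_by_confidence(dets, 0.5))
```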
function
float getConfidenceThreshold()
Retrieves threshold at which to filter the rest of the detections.
Returns
Detection confidence
function
std::vector< std::pair< Input &, std::shared_ptr< Capability > > > getRequiredInputs()
function
std::optional< std::vector< std::string > > getClasses()
function
void buildInternal()
Need assistance?
Head over to Discussion Forum for technical support or any other questions you might have.