Nodes are the building blocks of a Pipeline. Each node provides specific functionality on the DepthAI device, a set of configurable properties, and inputs/outputs. After you create a node on a pipeline, you can configure it as desired and link it to other nodes.
All available nodes are listed under the Node entry in the table of contents (left side of the page); click on a node to learn more about it.
Inputs and outputs
Each node can have zero, one, or multiple inputs and outputs. For example, the SystemLogger node has no inputs and 1 output, and EdgeDetector has 2 inputs and 1 output, as shown below. The Script node can have any number of inputs/outputs.
               ┌───────────────────┐
  inputImage   │                   │
──────────────►│                   │
               │                   │  outputImage
               │   EdgeDetector    ├───────────►
  inputConfig  │                   │
──────────────►│                   │
               │                   │
               └───────────────────┘

EdgeDetector node has 2 inputs and 1 output
A node's input queue is a queue of Messages. It can be linked to another node's output (that's how you link nodes together). Node inputs are configurable - you can set the queue size and whether the input is blocking. If the input queue fills up, the behavior depends on the input's blocking attribute.
Let's say we have linked the ColorCamera preview output to the NeuralNetwork input:
┌─────────────┐                      ┌───────────────┐
│             │                      │               │
│             │ preview        input │               │
│ ColorCamera ├─────────────────────►│ NeuralNetwork │
│             │      [ImgFrame]      │               │
│             │                      │               │
└─────────────┘                      └───────────────┘
If the input is set to blocking mode and its queue fills up, no new messages from the ColorCamera will be able to enter the input queue. The ColorCamera will block and hold its messages until it can push them to the NeuralNetwork input queue. If the ColorCamera preview is connected to multiple inputs, the same behavior applies, with messages being pushed sequentially to each input.
Depending on the pipeline configuration, this can lead to the pipeline freezing if some blocking input isn't being consumed.
If blocking is disabled, new messages push out old ones. This eliminates the risk of the pipeline freezing, but can result in dropped messages (e.g. ImgFrame).
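The difference between the two modes can be sketched in plain Python; this only models the queue semantics, not the actual DepthAI implementation:

```python
from collections import deque
from queue import Queue, Full

# Non-blocking input: a bounded deque drops the oldest message on overflow
nonblocking = deque(maxlen=3)
for frame in range(5):
    nonblocking.append(frame)  # frames 0 and 1 get pushed out
print(list(nonblocking))       # [2, 3, 4]

# Blocking input: a bounded queue makes the producer wait when full
# (here we use a non-waiting put to show where the producer would stall)
blocking = Queue(maxsize=3)
for frame in range(5):
    try:
        blocking.put(frame, block=False)
    except Full:
        print(f"producer would block on frame {frame}")
```

The non-blocking queue always accepts the newest frames at the cost of losing old ones; the blocking queue loses nothing but stalls the producer.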
Nodes output Messages. Some nodes have a configurable output message pool - a reserved memory region (to reduce memory fragmentation) that holds output messages. After a node creates an output message (for example an ImgFrame), it sends it to other nodes as specified when linking the node's inputs/outputs. Currently, some nodes (VideoEncoder, NeuralNetwork, ImageManip, XLinkIn) allow the pool size to be configured. The pool size specifies how many messages can be created and sent out while other messages are still in use somewhere in the pipeline.
When all the messages from the pool have been sent out and none have been returned yet, the node blocks (freezes) and waits until a message is released (no longer used by any node in the pipeline).
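One way to picture the pool is as a counting semaphore over a fixed number of message slots. This is a plain-Python model of the behavior described above, not the actual implementation, and all names are hypothetical:

```python
import threading

class MessagePool:
    """Models an output message pool with a fixed number of slots."""

    def __init__(self, size):
        self._slots = threading.Semaphore(size)

    def acquire_message(self, timeout=None):
        # Returns False when the pool is exhausted - this is the point
        # where the node would block and wait
        return self._slots.acquire(timeout=timeout)

    def release_message(self):
        # Called when a message is no longer used by any node
        self._slots.release()

pool = MessagePool(size=3)
sent = sum(pool.acquire_message(timeout=0.01) for _ in range(4))
print(sent)  # 3: three messages fit in the pool, the 4th acquire fails
```

Releasing any in-flight message (`pool.release_message()`) would let the blocked producer continue.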