Casting NN Concatenation

This example demonstrates how to concatenate frames from three cameras (RGB, left, and right) on the device using a NeuralNetwork node, and how to convert the network's raw output back into a displayable ImgFrame with the Cast node.
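The example loads a pre-compiled blob (concat_openvino_2021.4_6shave.blob), so the network itself is not shown here. Conceptually, it only places its three inputs side by side. Below is a minimal sketch of such a model in PyTorch, assuming inputs named img1, img2, and img3 joined along the width axis (an illustration of the idea, not necessarily the exact model the blob was compiled from):

Python
import torch

class ConcatModel(torch.nn.Module):
    # Hypothetical sketch of the kind of model behind the blob used below.
    def forward(self, img1, img2, img3):
        # Each input is an NCHW tensor of shape (1, 3, 300, 300);
        # dim=3 places them side by side, giving (1, 3, 300, 900).
        return torch.cat((img1, img2, img3), dim=3)

A model like this would be exported to ONNX and compiled into an OpenVINO blob (for example with blobconverter) before it can be loaded with nn.setBlobPath() below.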

Demo

Setup

Please run the install script to download all required dependencies. Note that this script must be run from within the cloned repository, so first download the depthai-python repository and then run the script:
Command Line
git clone https://github.com/luxonis/depthai-python.git
cd depthai-python/examples
python3 install_requirements.py
For additional information, please follow the installation guide.
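If you only need the Python packages themselves, they can also be installed directly from PyPI (an alternative route; the model blob used below is downloaded by the install script, so you would still need to obtain it separately):

Command Line
python3 -m pip install depthai opencv-python numpy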

Source code

Python
import numpy as np
import cv2
import depthai as dai
from pathlib import Path

SHAPE = 300

p = dai.Pipeline()

camRgb = p.create(dai.node.ColorCamera)
left = p.create(dai.node.MonoCamera)
right = p.create(dai.node.MonoCamera)
manipLeft = p.create(dai.node.ImageManip)
manipRight = p.create(dai.node.ImageManip)
nn = p.create(dai.node.NeuralNetwork)
cast = p.create(dai.node.Cast)
castXout = p.create(dai.node.XLinkOut)

camRgb.setPreviewSize(SHAPE, SHAPE)
camRgb.setInterleaved(False)
camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)

left.setCamera("left")
left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
manipLeft.initialConfig.setResize(SHAPE, SHAPE)
manipLeft.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)

right.setCamera("right")
right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
manipRight.initialConfig.setResize(SHAPE, SHAPE)
manipRight.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)

nnBlobPath = (Path(__file__).parent / Path('../models/concat_openvino_2021.4_6shave.blob')).resolve().absolute()
nn.setBlobPath(nnBlobPath)
nn.setNumInferenceThreads(2)

castXout.setStreamName("cast")
# Cast converts the raw NN output (NNData) back into a displayable ImgFrame
cast.setOutputFrameType(dai.ImgFrame.Type.BGR888p)

# Linking: the blob expects three inputs named 'img1', 'img2' and 'img3'
left.out.link(manipLeft.inputImage)
right.out.link(manipRight.inputImage)
manipLeft.out.link(nn.inputs['img1'])
camRgb.preview.link(nn.inputs['img2'])
manipRight.out.link(nn.inputs['img3'])
nn.out.link(cast.input)
cast.output.link(castXout.input)

# Pipeline is defined, now we can connect to the device
with dai.Device(p) as device:
    qCast = device.getOutputQueue(name="cast", maxSize=4, blocking=False)

    while True:
        inCast = qCast.get()
        assert isinstance(inCast, dai.ImgFrame)
        cv2.imshow("Concatenated frames", inCast.getCvFrame())

        if cv2.waitKey(1) == ord('q'):
            break
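The Cast node performs the output conversion on the device. Without it, the raw NNData would have to be decoded on the host, roughly as in the sketch below (an illustration only: qNN is a hypothetical queue linked straight to nn.out, and the reshape assumes a planar FP16 output of shape (3, SHAPE, SHAPE * 3)):

Python
inNN = qNN.get()                          # dai.NNData instead of dai.ImgFrame
data = np.array(inNN.getData())           # raw output tensor bytes
frame = (
    data.view(np.float16)                 # assumed FP16 output layout
    .reshape((3, SHAPE, SHAPE * 3))       # planar CHW, three frames wide
    .transpose(1, 2, 0)                   # to HWC for OpenCV
    .astype(np.uint8)
)
cv2.imshow("Concatenated frames", frame)

With the Cast node in the pipeline, none of this host-side reshaping is required, which is why numpy is never actually used in the example above.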

Pipeline

Need assistance?

Head over to the Discussion Forum for technical support or any other questions you might have.