

Casting NN Concatenation

This example demonstrates how to concatenate frames from multiple cameras (RGB, left, and right) using a NeuralNetwork and the Cast node.
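The neural network in this example does nothing more than a tensor concatenation. As a rough host-side sketch of the same operation (assuming the blob joins the three planar BGR frames along the width axis, which is what the shapes in this example suggest), the equivalent in NumPy would be:

```python
import numpy as np

SHAPE = 300  # preview/resize size used in the pipeline below

# Three dummy planar BGR frames (channels, height, width), as the NN receives them
img1 = np.zeros((3, SHAPE, SHAPE), dtype=np.uint8)    # left mono, converted to BGR888p
img2 = np.ones((3, SHAPE, SHAPE), dtype=np.uint8)     # RGB preview
img3 = np.full((3, SHAPE, SHAPE), 2, dtype=np.uint8)  # right mono, converted to BGR888p

# Concatenate along the width axis -> one 3 x 300 x 900 tensor
concatenated = np.concatenate([img1, img2, img3], axis=2)
print(concatenated.shape)  # (3, 300, 900)
```

On the device this concatenation runs inside the NeuralNetwork node, and the Cast node then converts the raw NN output tensor back into a displayable ImgFrame.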

Demo

Setup

Please run the install script to download all required dependencies. Note that the script must be run from within the cloned repository, so first download the depthai-python repository and then run the script:
Command Line
git clone https://github.com/luxonis/depthai-python.git
cd depthai-python/examples
python3 install_requirements.py
For additional information, please follow the installation guide.

Source code

Python
#!/usr/bin/env python3

import numpy as np
import cv2
import depthai as dai
from pathlib import Path

SHAPE = 300

p = dai.Pipeline()

camRgb = p.create(dai.node.ColorCamera)
left = p.create(dai.node.MonoCamera)
right = p.create(dai.node.MonoCamera)
manipLeft = p.create(dai.node.ImageManip)
manipRight = p.create(dai.node.ImageManip)
nn = p.create(dai.node.NeuralNetwork)
cast = p.create(dai.node.Cast)
castXout = p.create(dai.node.XLinkOut)

camRgb.setPreviewSize(SHAPE, SHAPE)
camRgb.setInterleaved(False)
camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)

left.setCamera("left")
left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
manipLeft.initialConfig.setResize(SHAPE, SHAPE)
manipLeft.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)

right.setCamera("right")
right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
manipRight.initialConfig.setResize(SHAPE, SHAPE)
manipRight.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)

nnBlobPath = (Path(__file__).parent / Path('../models/concat_openvino_2021.4_6shave.blob')).resolve().absolute()
nn.setBlobPath(nnBlobPath)
nn.setNumInferenceThreads(2)

castXout.setStreamName("cast")
cast.setOutputFrameType(dai.ImgFrame.Type.BGR888p)

# Linking
left.out.link(manipLeft.inputImage)
right.out.link(manipRight.inputImage)
manipLeft.out.link(nn.inputs['img1'])
camRgb.preview.link(nn.inputs['img2'])
manipRight.out.link(nn.inputs['img3'])
nn.out.link(cast.input)
cast.output.link(castXout.input)

# Pipeline is defined, now we can connect to the device
with dai.Device(p) as device:
    qCast = device.getOutputQueue(name="cast", maxSize=4, blocking=False)

    while True:
        inCast = qCast.get()
        assert isinstance(inCast, dai.ImgFrame)
        cv2.imshow("Concatenated frames", inCast.getCvFrame())

        if cv2.waitKey(1) == ord('q'):
            break
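If you need the individual views back on the host, the concatenated frame can be split with plain array slicing. A minimal sketch (using a stand-in array for `inCast.getCvFrame()`, and assuming the blob joins the frames along the width axis so the output is a 300 x 900 BGR image):

```python
import numpy as np

SHAPE = 300

# Stand-in for inCast.getCvFrame(): the three views joined side by side (H, 3*W, C)
frame = np.zeros((SHAPE, 3 * SHAPE, 3), dtype=np.uint8)
frame[:, SHAPE:2 * SHAPE] = 255  # mark the middle (RGB) view so the split is visible

# Slice the width axis back into the three source views
leftView = frame[:, :SHAPE]
rgbView = frame[:, SHAPE:2 * SHAPE]
rightView = frame[:, 2 * SHAPE:]
print(leftView.shape, rgbView.shape, rightView.shape)  # three (300, 300, 3) views
```

Note that these slices are views into the original array, so no pixel data is copied.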

Pipeline

Need assistance?

Head over to the Discussion Forum for technical support or any other questions you might have.