The Replay Node

Use the rh.ColorReplayCamera node as a stand-in for the dai.node.ColorCamera node when you want to feed video files or image sequences into your app instead of live camera input. Simply point it at an .mp4 file or a folder full of .jpg images and it'll handle the rest, piping those videos or images straight into your pipeline.

This is super handy for developing, testing, or tweaking your app with pre-recorded footage or snapshots. Plus, if you've got an app running on Luxonis Hub that sends video events, you can loop those videos back in to refine, debug, or further test your app.

Currently, it supports the video, preview, and out outputs, covering the basics pretty well. The video output offers NV12 frames for encoding, preview feeds BGR or RGB frames ideal for neural network inputs, and out delivers RAW8 grayscale images, just like you'd get from a MonoCamera's out output.
Python
# feed frames from a video file, looping back to the start when it ends
rgb_sensor = rh.ColorReplayCamera(pipeline=pipeline, fps=rh.CONFIGURATION["fps"], src="video.mp4", run_in_loop=True)
# or, when you have .jpg images stored in 'folder_with_jpg/', point src at the folder instead
rgb_sensor = rh.ColorReplayCamera(pipeline=pipeline, fps=rh.CONFIGURATION["fps"], src="folder_with_jpg/", run_in_loop=True)
# create other pipeline nodes, then link the replay outputs where ColorCamera outputs would go
rgb_sensor.video.link(h264_encoder.input)   # NV12 frames for the video encoder
rgb_sensor.preview.link(nn_node.input)      # BGR/RGB frames for the neural network
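The out output can be dropped in anywhere MonoCamera-style RAW8 frames are expected. As a minimal sketch, here's how you could stream those grayscale frames to the host through a standard DepthAI XLinkOut node; the stream name "gray" is just an illustrative choice:
Python
# assumption for illustration: inspecting the RAW8 grayscale frames on the host
xout_gray = pipeline.createXLinkOut()
xout_gray.setStreamName("gray")
rgb_sensor.out.link(xout_gray.input)  # RAW8 grayscale, like MonoCamera.out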
Substituting a MonoCamera node looks like this:
Python
mono_sensor = rh.MonoReplayCamera(pipeline=pipeline, fps=rh.CONFIGURATION["fps"], src="video.mp4", run_in_loop=True)
# convert the RAW8 grayscale output to BGR for nodes that expect color frames
mono_sensor_bgr = pipeline.createImageManip()
mono_sensor_bgr.setFrameType(dai.RawImgFrame.Type.BGR888p)
mono_sensor.out.link(mono_sensor_bgr.inputImage)
# create other pipeline nodes
mono_sensor.out.link(h264_encoder.input)
mono_sensor_bgr.out.link(nn_node.input)
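For context, here's a minimal end-to-end sketch of how the replay camera slots into a pipeline in place of a live camera. The encoder setup uses standard DepthAI calls; the robothub import name and the fps value in rh.CONFIGURATION are assumptions carried over from the snippets above, not guaranteed by this page:
Python
import depthai as dai
import robothub as rh  # assumption: rh refers to the RobotHub SDK, as in the snippets above

pipeline = dai.Pipeline()

# replay node standing in for a live ColorCamera
rgb_sensor = rh.ColorReplayCamera(pipeline=pipeline, fps=rh.CONFIGURATION["fps"], src="video.mp4", run_in_loop=True)

# standard DepthAI H.264 encoder fed by the replay node's NV12 video output
h264_encoder = pipeline.createVideoEncoder()
h264_encoder.setDefaultProfilePreset(rh.CONFIGURATION["fps"], dai.VideoEncoderProperties.Profile.H264_MAIN)
rgb_sensor.video.link(h264_encoder.input)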