• Robotics Vision Core 2 (RVC2)
  • RVC2 Performance
  • RVC2 NN Performance
  • NN Performance estimation
  • Power consumption
  • Hardware blocks and accelerators

Robotics Vision Core 2 (RVC2)

Robotics Vision Core 2 (RVC2 for short) is the second generation of our RVC. Series 2 OAK devices, as well as our initial devices, are built on top of the RVC2. RVC2 encapsulates two main components:
  • DepthAI features that are fine-tuned for the particular SoC
  • A performant SoC and all of its support circuitry (high-speed PCB layout, power delivery network, efficient heat dissipation, etc.)

RVC2 Performance

RVC2 NN Performance

Click here for the full table with 46 test results.
Model name   | Size    | FPS   | Latency [ms]
MobileOne S0 | 224x224 | 165.5 | 11.1
DeepLab V3   | 256x256 | 36.5  | 48.1
DeepLab V3   | 513x513 | 6.3   | 253.1
YoloV6n R2   | 416x416 | 65.5  | 29.3
YoloV6n R2   | 640x640 | 29.3  | 66.4
YoloV6t R2   | 416x416 | 35.8  | 54.1
YoloV6t R2   | 640x640 | 14.2  | 133.6
YoloV6m R2   | 416x416 | 8.6   | 190.2
Models were compiled for 8 SHAVE cores and used 2 NN inference threads. Latency includes retrieving results from the device over USB3. Five iterations were run for each model, and FPS was calculated as the average.
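As a rough sanity check on the table above: with 2 inference threads the device can pipeline two inferences at once, so throughput is approximately threads / latency. The sketch below is an approximation, not an official formula, but it reproduces the measured FPS to within roughly 10% for most rows:

```python
def fps_from_latency(latency_ms: float, threads: int = 2) -> float:
    """Approximate throughput when `threads` inferences are pipelined."""
    return threads / (latency_ms / 1000.0)

# YoloV6n R2 at 416x416: 29.3 ms latency -> ~68.3 FPS estimated,
# versus 65.5 FPS measured in the table above.
print(round(fps_from_latency(29.3), 1))
```

The gap between estimate and measurement grows for heavier models (e.g. YoloV6m R2), where scheduling and data-movement overheads are no longer negligible.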

NN Performance estimation

You can estimate the performance of a model with the help of the chart below, which contains FPS estimates for models on RVC2 based on their FLOPs and parameter counts. Click on the image to view a more detailed evaluation of FPS for common models.
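If you only know a model's compute cost, a first-order estimate divides an assumed effective throughput by the FLOPs per inference. The effective rate below is a hypothetical placeholder, not a documented RVC2 figure — fit it from the chart for your model family before relying on it:

```python
def estimate_fps(gflops_per_inference: float,
                 effective_gflops_per_s: float = 50.0) -> float:
    # effective_gflops_per_s is a HYPOTHETICAL fitted constant, not an
    # RVC2 spec; real throughput depends on layer types, quantization,
    # and memory traffic, which is why the chart groups model families.
    return effective_gflops_per_s / gflops_per_inference
```

For example, two models with the same FLOPs can differ substantially in FPS if one is dominated by memory-bound layers, so treat this as an order-of-magnitude guide only.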

Power consumption

The RVC2 itself has a maximum power consumption of about 4.5 W, most of which is consumed by the SoC, the Movidius Myriad X, integrated inside the RVC2.

Hardware blocks and accelerators

The SoC integrates a number of hardware accelerators, and the DepthAI API has been designed to utilize them optimally:
  • 2x Leon CPU cores:
    • Leon CSS handles the USB/Ethernet stack (managed by the XLink framework), the IMU, and the 3A algorithms. One way to reduce CSS CPU load is to lower the 3A rate, currently by reducing camera FPS. We are also working on skipping 3A for some frames (e.g. running 3A only every 3rd frame). CSS CPU load is higher on PoE models, as the CSS also runs the Ethernet stack.
    • Leon MSS handles everything else: scheduling HW-accelerated features, using SHAVEs, etc.
  • ISP - Image Signal Processor, used for image processing such as denoising, sharpening, etc. The whole ISP configuration is exposed through the API via the ColorCamera and MonoCamera nodes.
  • 2x NCEs (Neural Compute Engines) - architected for a wide range of NN operations/layers; layers the NCEs do not implement run on the SHAVE cores instead.
  • 16x SHAVE cores - vector processors used for executing some NN operations/layers. They are versatile and can be used for other tasks as well, such as CV algorithms (reformatting images, some ISP work, etc.).
    • Higher resolutions consume more SHAVEs: 3 SHAVEs are used for 1080P, and 6 SHAVEs for 4K.
    • An internal resource manager inside DepthAI coordinates the use of SHAVEs and warns if a given pipeline configuration requests too many resources.
  • 20x CMX slices - fast SRAM memory blocks (128 kB each) used for temporary storage of intermediate calculations. They are used by NN models, the camera ISP (3 CMX slices for 1080P or below), image manipulation processes, etc. Note that 4 CMX slices are pre-allocated, so only 16 are free.
  • Stereo pipeline - stereo matching (census transform, cost matching, and cost aggregation), used by the StereoDepth node.
  • Video encoder - supports the MJPEG, H.264, and H.265 codecs; used by the VideoEncoder node.
  • Vision blocks
You can check SHAVE and CMX usage by enabling debug information.
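The SHAVE and CMX budgeting described above can be sketched as simple bookkeeping. This is a hypothetical helper for a 1080P color pipeline, not part of the DepthAI API; the actual accounting is done by DepthAI's internal resource manager:

```python
TOTAL_SHAVES = 16       # 16x SHAVE vector processors on the SoC
TOTAL_CMX = 20          # 20x 128 kB CMX SRAM slices
PREALLOCATED_CMX = 4    # 4 slices are pre-allocated, leaving 16 free

def remaining_resources(nn_shaves: int = 8) -> tuple[int, int]:
    """Free SHAVEs and CMX slices after a 1080P color stream and an NN model."""
    isp_shaves = 3      # 1080P consumes 3 SHAVEs (4K would consume 6)
    isp_cmx = 3         # camera ISP uses 3 CMX slices for 1080P or below
    free_shaves = TOTAL_SHAVES - isp_shaves - nn_shaves
    free_cmx = TOTAL_CMX - PREALLOCATED_CMX - isp_cmx
    return free_shaves, free_cmx

print(remaining_resources())  # -> (5, 13)
```

With a model compiled for 8 SHAVEs (as in the benchmark table), 5 SHAVEs and 13 CMX slices remain for other pipeline nodes; the resource manager warns when a configuration would exceed these budgets.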