Improving Image Quality¶
There are a few ways to improve Image Quality (IQ) on OAK cameras. A few examples:

- Changing Color camera ISP configuration
- Keeping camera sensitivity low - Low-light increased sensitivity
- Camera tuning with custom tuning blobs
- Ways to reduce Motion blur effects
Note that the Series 3 OAK cameras will also have a temporal noise filter, which will improve IQ.
For best IQ, we suggest testing it yourself for your specific application. You can use RGB Camera Control to try out different ISP configurations and exposure/sensitivity values dynamically (live).
Color camera ISP configuration¶
You can configure ColorCamera ISP values such as luma denoise and chroma denoise, which can improve IQ. We have noticed that these values sometimes provide better results:
```python
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.initialControl.setSharpness(0)     # range: 0..4, default: 1
camRgb.initialControl.setLumaDenoise(0)   # range: 0..4, default: 1
camRgb.initialControl.setChromaDenoise(4) # range: 0..4, default: 1
```
The zoomed-in image above showcases the IQ difference between ISP configurations (discussion here). Note that for best IQ, you should test and evaluate these values for your specific application.
On Wide FOV Series 2 cameras, you can select between the wide FOV IMX378 and the OV9782. In general, the IQ of the OV9782 won't be as good as that of, say, the IMX378: its resolution is much lower, and sharpness/noise are harder to deal with at low resolutions. With a high-resolution sensor, the image can be downscaled, making noise less visible. And even though the OV9782 has quite large pixels, noise levels of global shutter sensors are generally more significant than those of rolling shutter sensors.
Low-light increased sensitivity¶
The image below shows how different sensitivity values affect IQ. Increasing sensitivity only adds analog gain, which increases image noise. In a low-light environment, you should always increase exposure first, not sensitivity. Note that by default, depthai will do so - but at 30 FPS, the maximum exposure time is 33 ms. For the right image below, we set the ColorCamera to 10 FPS, which allowed increasing the exposure to 100 ms.
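The maximum exposure time follows directly from the frame rate: the shutter can't stay open longer than one frame interval. A minimal sketch of this arithmetic (plain Python, no depthai required):

```python
def max_exposure_ms(fps: float) -> float:
    """Longest possible exposure time (ms) at a given frame rate:
    the shutter can stay open at most one frame interval."""
    return 1000.0 / fps

# At 30 FPS the exposure is capped at ~33 ms;
# dropping to 10 FPS allows up to 100 ms.
print(round(max_exposure_ms(30), 1))  # → 33.3
print(round(max_exposure_ms(10), 1))  # → 100.0
```

This is why lowering the FPS (as done for the right image below) is the prerequisite for longer exposures.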
About 15x digitally zoomed-in image of a standard A4 camera tuning target at 420cm (40 lux). We have used 12MP IMX378 (on OAK-D) for this image.
Camera tuning¶
Our library supports setting a camera IQ tuning blob, which will be used for all cameras. By default, cameras use a general tuning blob that works well in most cases, so changing the tuning blob is usually not needed.
```python
import depthai as dai

pipeline = dai.Pipeline()
pipeline.setCameraTuningBlobPath('/path/to/tuning.bin')
```
Available tuning blobs¶
To tune your own camera sensors, you would need Intel's software, which requires a license - so most people will only be able to use pre-tuned blobs. Currently available tuning blobs:
Mono tuning for low-light environments here. This allows auto-exposure to go up to 200ms (otherwise limited to 33ms by the default tuning). For 200ms auto-exposure, you also need to limit the FPS accordingly, as a 200ms exposure allows at most 5 FPS.
Color tuning for low-light environments here. Comparison below. This allows auto-exposure to go up to 100ms (otherwise limited to 33ms by the default tuning). For 100ms auto-exposure, you also need to limit the FPS (rgbCam.setFps(10)). Known limitation: flicker can be seen with auto-exposure over 33ms; it is caused by auto-focus working in continuous mode. A workaround is to change from CONTINUOUS_VIDEO (default) to AUTO (focusing only once at init, and on further focus trigger commands):
OV9782 Wide FOV color tuning for sunlight environments here. Fixes lens color filtering in direct sunlight, see blog post here. It also improves LSC (Lens Shading Correction). Currently it doesn't work for the OV9282, so when used e.g. on a Series 2 OAK with Wide FOV cams, the mono cameras shouldn't be enabled.
Camera exposure limit tuning blobs: max 500us and max 8300us. These tuning blobs limit the maximum exposure time and instead start increasing ISO (sensitivity) once the exposure limit is reached. This is a useful approach to reduce motion blur.
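The behaviour of such exposure-limit blobs can be illustrated with a toy model (plain Python; the brightness model and numbers are illustrative, not the actual ISP auto-exposure algorithm):

```python
def auto_exposure(required_exposure_us: float, exposure_cap_us: float,
                  base_iso: int = 100) -> tuple:
    """Toy model of an exposure-capped AE policy: raise exposure until the
    cap, then compensate for the remaining brightness deficit with ISO."""
    if required_exposure_us <= exposure_cap_us:
        return required_exposure_us, base_iso
    # Past the cap, gain (ISO) makes up the brightness difference
    iso = base_iso * required_exposure_us / exposure_cap_us
    return exposure_cap_us, round(iso)

# Bright scene: the 500us cap is not reached, ISO stays at base
print(auto_exposure(300, 500))   # → (300, 100)
# Dim scene: exposure pinned at 500us, ISO raised instead
print(auto_exposure(5000, 500))  # → (500, 1000)
```

The trade-off is the one described above for sensitivity: the capped exposure keeps motion blur low, but the raised ISO adds noise.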
Motion blur¶
Motion blur appears when the camera shutter is open for a longer time and the object moves during that time.
In the image above, the right foot moved about 50 pixels during the exposure time, which results in a blurry image in that region. The left foot was on the ground for the whole exposure, so it isn't blurry.
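The amount of blur is simply object speed times exposure time. A back-of-the-envelope sketch (plain Python; the speed value is invented to roughly match the ~50 px example above):

```python
def blur_px(speed_px_per_s: float, exposure_ms: float) -> float:
    """Pixels an object smears across the image while the shutter is open."""
    return speed_px_per_s * exposure_ms / 1000.0

# An object moving at ~1500 px/s imaged with a 33 ms exposure (30 FPS default)
print(round(blur_px(1500, 33)))   # → 50 (blurry)
# The same motion with a 500 us exposure cap
print(round(blur_px(1500, 0.5)))  # → 1 (sharp)
```

This is why capping the exposure time (see the tuning blobs above) is so effective against motion blur.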
In high-vibration environments we recommend using a Fixed-Focus color camera, as otherwise the Auto-Focus lens will shake and cause blurry images (docs here).
Decrease the shutter (exposure) time - this decreases motion blur, but also reduces the light reaching the sensor, so the image will be darker. To compensate, you could either use a larger sensor (so more photons hit it) or a higher ISO (sensitivity) value. One option to limit the max exposure time is using a Camera tuning blob.
If motion blur negatively affects your model's accuracy, you could fine-tune the model to be more robust to it by including motion-blurred images in your training dataset. Example video: