# Image Quality

Image Quality (IQ) is a measure of how well the image represents the original scene. It's a combination of many factors, such as
sharpness, noise, color accuracy, and more.

There are several ways to improve Image Quality (IQ) on OAK cameras, for example:

 1. Changing the [Color camera ISP configuration](#Image%2520Quality-Color%2520camera%2520ISP%2520configuration)
 2. Keeping camera sensitivity low - [Low-light increased
    sensitivity](#Image%2520Quality-Low-light%2520increased%2520sensitivity)
 3. [Camera tuning](#Image%2520Quality-Camera%2520tuning) with custom tuning blobs
 4. Reducing [Motion blur](#Image%2520Quality-Motion%2520blur) effects

For the best IQ, we suggest testing configurations yourself for your specific application. You can use [RGB Camera
Control](https://docs.luxonis.com/software/depthai/examples/rgb_camera_control.md) to try different ISP configurations and
exposure/sensitivity values dynamically (live).
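As a minimal sketch of what live control looks like in code (DepthAI v2 API; assumes a connected OAK device, and the stream name and values are illustrative), the host can send `CameraControl` messages to the camera at runtime:

```python
import depthai as dai

pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.ColorCamera)

# XLinkIn lets the host send CameraControl messages to the camera at runtime
controlIn = pipeline.create(dai.node.XLinkIn)
controlIn.setStreamName("control")
controlIn.out.link(camRgb.inputControl)

with dai.Device(pipeline) as device:
    controlQueue = device.getInputQueue("control")

    # Try out an exposure/sensitivity combination without rebuilding the pipeline
    ctrl = dai.CameraControl()
    ctrl.setManualExposure(20000, 400)  # exposure time [us], ISO (sensitivity)
    controlQueue.send(ctrl)
```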

## Color camera ISP configuration

You can configure [ColorCamera](https://docs.luxonis.com/software/depthai-components/nodes/color_camera.md#colorcamera) ISP values
such as sharpness, luma denoise, and chroma denoise, which can improve IQ. We have noticed that sometimes these values provide
better results:

```python
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.initialControl.setSharpness(0)     # range: 0..4, default: 1
camRgb.initialControl.setLumaDenoise(0)   # range: 0..4, default: 1
camRgb.initialControl.setChromaDenoise(4) # range: 0..4, default: 1
```

The zoomed-in image above showcases the IQ difference between ISP configurations ([discussion
here](https://discuss.luxonis.com/d/554-image-pipelines-compression-and-filtering/8)). Note that for the best IQ, you will need
to test and evaluate these values for your specific application.

On the Wide FOV cameras, you can select between the wide FOV IMX378 and the OV9782. In general, the IQ of the OV9782 won't be as
good as the IMX378's, as its resolution is much lower, and sharpness/noise are harder to deal with at low resolutions. A
high-resolution image can be downscaled, which makes noise less visible. And even though the OV9782 has quite large pixels, the
noise levels of global shutter sensors are generally more significant than those of rolling shutter sensors.

## Low-light increased sensitivity

The image below shows how different sensitivity values affect the IQ. Sensitivity only adds analog gain, which increases image
noise. In a low-light environment, you should always increase exposure first, not sensitivity. Note that by default, depthai
does exactly that - but at 30 FPS, the maximum exposure is 33ms. For the right image below, we set the ColorCamera to 10 FPS,
which allowed us to increase exposure to 100ms.

The image shows an approximately 15x digitally zoomed-in view of a standard A4 camera tuning target at 420cm (40 lux), captured
with the 12MP IMX378 (on an OAK-D).
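The 10 FPS / 100ms combination described above can be sketched as follows (DepthAI v2 API; the exposure and ISO values are illustrative):

```python
import depthai as dai

pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setFps(10)  # at 10 FPS the frame time is 100ms, so exposure can go up to ~100ms

# Keep sensitivity (ISO) low and exposure long to reduce noise in low light
camRgb.initialControl.setManualExposure(100000, 100)  # exposure [us], ISO
```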

## Camera tuning

Our library supports setting a camera IQ tuning blob, which is used for all cameras on the device. By default, cameras use a
general tuning blob that works well in most cases, so changing it is usually not needed.

```py
import depthai as dai

pipeline = dai.Pipeline()
pipeline.setCameraTuningBlobPath('/path/to/tuning.bin')
```

To tune your own camera sensors, you would need Intel's software, for which a license is needed, so the majority of users will
only be able to use the pre-tuned blobs. Currently available tuning blobs:

 * Mono tuning for low-light environments
   [here](https://artifacts.luxonis.com/artifactory/luxonis-depthai-data-local/misc/tuning_mono_low_light.bin). This allows
   auto-exposure to go up to 200ms (with the default tuning it is limited to 33ms). For 200ms auto-exposure, you also need to
   limit the FPS accordingly (`monoRight.setFps(5)`).

 * Color tuning for low-light environments
   [here](https://artifacts.luxonis.com/artifactory/luxonis-depthai-data-local/misc/tuning_color_low_light.bin). Comparison below.
   This allows auto-exposure to go up to 100ms (with the default tuning it is limited to 33ms). For 100ms auto-exposure, you also
   need to limit the FPS accordingly (`rgbCam.setFps(10)`). Known limitation: flicker can be seen with auto-exposure over 33ms; it
   is caused by auto-focus working in continuous mode. A workaround is to change from `CONTINUOUS_VIDEO` (default) to `AUTO`
   (focusing only once at init, and on further focus trigger commands):
   `camRgb.initialControl.setAutoFocusMode(dai.CameraControl.AutoFocusMode.AUTO)`

 * OV9782 Wide FOV color tuning for sunlight environments
   [here](https://artifacts.luxonis.com/artifactory/luxonis-depthai-data-local/misc/tuning_color_ov9782_wide_fov.bin). Fixes lens
   color filtering in direct sunlight, see [blog post here](https://www.luxonis.com/blog/lens_color_filtering_enhancement). It
   also improves LSC (Lens Shading Correction). It currently doesn't work for the OV9282, so when used on e.g. a Series 2 OAK
   with Wide FOV cams, the mono cameras shouldn't be enabled.

 * Camera exposure limit: [max
   500us](https://artifacts.luxonis.com/artifactory/luxonis-depthai-data-local/misc/tuning_exp_limit_500us.bin), [max
   8300us](https://artifacts.luxonis.com/artifactory/luxonis-depthai-data-local/misc/tuning_exp_limit_8300us.bin). These tuning
   blobs limit the maximum exposure time and instead start increasing ISO (sensitivity) once that limit is reached.
   This is a useful approach to reducing [Motion blur](#Image%2520Quality-Motion%2520blur).
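As a sketch, the mono low-light tuning blob from the list above can be combined with the FPS limit like this (DepthAI v2 API; the blob path is illustrative):

```python
import depthai as dai

pipeline = dai.Pipeline()
# Use the downloaded low-light mono tuning blob (path is illustrative)
pipeline.setCameraTuningBlobPath('/path/to/tuning_mono_low_light.bin')

monoRight = pipeline.create(dai.node.MonoCamera)
monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
monoRight.setFps(5)  # at 5 FPS the frame time is 200ms, allowing 200ms auto-exposure
```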

## Motion blur

[Motion blur](https://en.wikipedia.org/wiki/Motion_blur) appears when the camera shutter stays open for a longer time and the
object moves during that time.

The animation shows the difference between a shorter (left) and a longer (right) exposure time. With a shorter exposure time,
even a fast-moving object moves only a short distance, which causes less motion blur.

In the image above, the right foot moved about 50 pixels during the exposure time, which results in a blurry image in that
region. The left foot was on the ground for the whole exposure, so it is not blurry.

In high-vibration environments we recommend using a Fixed-Focus color camera, as otherwise the Auto-Focus lens will shake and
cause blurry images ([docs here](https://docs.luxonis.com/hardware/platform/sensors/focus-type.md)).

Potential workarounds:

 * Have better (brighter) lighting, which will cause the camera to use a shorter exposure time, and thus reduce motion blur.
 * Limit the shutter (exposure) time - this decreases motion blur, but also decreases the light that reaches the sensor, so the
   image will be darker. To compensate, you can use a larger sensor (so more photons hit it) or a higher ISO (sensitivity)
   value. One option to limit the maximum exposure time is a [Camera tuning
   blob](#Image%2520Quality-Camera%2520tuning); another is to set the limit directly via the API:

```py
camRgb = pipeline.create(dai.node.ColorCamera)
# Max exposure limit in microseconds. After this time, ISO will be increased instead of exposure.
camRgb.initialControl.setAutoExposureLimit(10000) # Max 10ms
```

 * If motion blur negatively affects your model's accuracy, you can fine-tune the model to be more robust to it by including
   motion-blurred images in your training dataset.
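For instance, motion blur can be simulated on training images with a simple directional averaging filter. Below is a minimal NumPy sketch (real augmentation pipelines typically use a dedicated library instead):

```python
import numpy as np

def horizontal_motion_blur(img, ksize=9):
    """Simulate horizontal motion blur by averaging ksize neighboring pixels per row."""
    kernel = np.ones(ksize) / ksize
    out = np.empty(img.shape, dtype=float)
    for i, row in enumerate(np.asarray(img, dtype=float)):
        out[i] = np.convolve(row, kernel, mode="same")
    return out

# A sharp vertical edge gets smeared across roughly ksize pixels
img = np.zeros((4, 20))
img[:, 10:] = 255.0
blurred = horizontal_motion_blur(img, ksize=5)
```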

## High Dynamic Range (HDR) Imaging

High Dynamic Range (HDR) captures multiple exposures at different brightness levels and merges them into a single image. This
approach preserves shadow detail that would otherwise be lost to noise while also keeping highlights from clipping, giving a
balanced image in scenes with both bright and dark regions.

### Supported Devices

HDR controls are available on the color sensors that ship with [OAK-1
Max](https://docs.luxonis.com/hardware/products/OAK-1%2520Max.md), [OAK-1
PoE](https://docs.luxonis.com/hardware/products/OAK-1%2520PoE.md), every OAK4 model, and FFC CBA builds that use the
[IMX708](https://docs.luxonis.com/hardware/sensors/IMX708.md), [IMX582](https://docs.luxonis.com/hardware/sensors/IMX582.md), or
[IMX586](https://docs.luxonis.com/hardware/sensors/IMX586.md) modules. Each of these devices exposes the same DepthAI ColorCamera
interface, so the configuration steps below apply regardless of whether the pipeline runs directly on the device or from a host.

### QBC HDR

HDR-capable sensors use Quad Bayer Coding (QBC) HDR, capturing long, middle, and short exposures in every 2×2 pixel block
simultaneously. The sensor then merges the trio into a 16-bit frame, runs tone control, and compresses the result back to a 10-bit
stream that preserves both highlight and shadow detail.

> ℹ️ Note: Because exposures occur in parallel on the 2×2 Quad Bayer pixel structure with different durations, some motion
artifacts may appear in fast-moving scenes.
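In principle, the merge step works like exposure fusion: each capture is scaled by its exposure time, and unsaturated pixels are preferred. The NumPy sketch below is a simplified host-side illustration of that idea, not the sensor's actual on-chip algorithm:

```python
import numpy as np

def merge_exposures(frames, exposure_ms, sat=1000):
    """Fuse captures into a radiance estimate, preferring unsaturated pixels."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    # Near-zero weight for saturated pixels; otherwise trust the capture fully
    weights = [np.where(f < sat, 1.0, 1e-6) for f in frames]
    radiance = sum(w * f / t for w, f, t in zip(weights, frames, exposure_ms))
    return radiance / sum(weights)

# Scene radiance of 50 units/ms: the 100ms and 25ms captures clip at 1023 (10-bit),
# but the 6.25ms capture stays unsaturated, so the merge recovers ~50
frames = [np.array([[1023.0]]), np.array([[1023.0]]), np.array([[312.5]])]
merged = merge_exposures(frames, [100.0, 25.0, 6.25])
```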

### Configuring HDR Controls

Configure HDR before starting the pipeline by setting ColorCamera controls:

```python
colorCam.initialControl.setMisc("hdr-exposure-ratio", 4)  # enables HDR when set > 1 (options: 2, 4, 8)
colorCam.initialControl.setMisc("hdr-local-tone-weight", 75)  # range: 0..100
```

### Exposure Ratio

The `hdr-exposure-ratio` controls the relative exposure times of the three separate captures that compose a single HDR image:

 * Long Exposure: This is the base exposure time, typically set manually or by auto-exposure.
 * Middle Exposure: Calculated as `long exposure time / hdr-exposure-ratio`.
 * Short Exposure: Calculated as `long exposure time / (hdr-exposure-ratio * hdr-exposure-ratio)`.

For example, with an hdr-exposure-ratio of 4 and a long exposure of 100ms:

 * Long Exposure: 100ms
 * Middle Exposure: 25ms (100ms / 4)
 * Short Exposure: 6.25ms (100ms / 16)
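The arithmetic above can be expressed directly (a trivial helper for illustration, not part of the DepthAI API):

```python
def hdr_exposures(long_ms, ratio):
    """Return (long, middle, short) exposure times for a given hdr-exposure-ratio."""
    return long_ms, long_ms / ratio, long_ms / ratio**2

# With ratio 4 and a 100ms long exposure:
print(hdr_exposures(100, 4))  # (100, 25.0, 6.25)
```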

### Local Tone Weight

The `hdr-local-tone-weight` balances how brightness and contrast adjustments are applied across the image:

 * Local Tone Mapping: Adjusts brightness and contrast in small areas to preserve details and textures.
 * Global Tone Mapping: Modifies the entire image to keep it looking balanced and realistic.

Higher `hdr-local-tone-weight` values enhance details but may lead to unnatural contrast. Lower values help the image look
uniform but can reduce clarity in detailed areas.

### HDR Comparison

Below is a side-by-side comparison of HDR versus non-HDR on the IMX582 sensor using [OAK-1
Max](https://docs.luxonis.com/hardware/products/OAK-1%2520Max.md). The left image is underexposed, hiding detail in the shadows.
The middle image is overexposed, washing out highlights. The right image, captured with HDR enabled, retains detail across the
entire dynamic range.

The HDR image was produced using the
[camera_hdr.py](https://github.com/luxonis/depthai-core/blob/main/examples/python/Camera/camera_hdr.py) script.
