i-Perception. 2019 May 10;10(3):2041669519841073.
doi: 10.1177/2041669519841073. eCollection 2019 May-Jun.

Image Content Enhancement Through Salient Regions Segmentation for People With Color Vision Deficiencies


Alessandro Bruno et al. i-Perception, 2019.

Abstract

Color vision deficiencies affect the visual perception of colors and, more generally, of color images. Several sciences, such as genetics, biology, medicine, and computer vision, are involved in studying and analyzing vision deficiencies. As we know from visual saliency findings, the human visual system tends to fixate on specific points and regions of an image in the first seconds of observation, summing up the most important and meaningful parts of the scene. In this article, we provide some studies of the behavioral differences between normal and color vision-deficient visual systems. We eye-tracked human fixations in the first 3 seconds of observation of color images to build real fixation point maps. One of our contributions is to detect the main differences between the aforementioned visual systems by analyzing real fixation maps of people with and without color vision deficiencies. Another contribution is a method to enhance color regions of an image by using a detailed color mapping of its segmented salient regions. The segmentation is performed by using the difference between the original input image and the corresponding color blind altered image. A second eye-tracking session, in which color blind people viewed the images enhanced by recoloring the segmented salient regions, reveals that the real fixation points become more coherent (up to 10%) with the normal visual system. The eye-tracking data collected during our experiments are available in a public dataset called Eye-Tracking of Color Vision Deficiencies.
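The segmentation idea described above, marking regions where saliency differs between the original image and its color blind simulation, can be sketched as follows. The per-pixel saliency proxy, the `saliency_error_mask` name, and the threshold are illustrative assumptions, not the authors' implementation, which relies on a full saliency model and a CVD simulation.

```python
import numpy as np

def saliency_error_mask(original, cvd_simulated, threshold=0.1):
    """Hypothetical sketch: segment the regions where a saliency
    estimate differs between an image and its color blind simulation.
    A crude per-pixel proxy (distance from the mean color) stands in
    for a real saliency detector."""
    def proxy_saliency(img):
        mean = img.reshape(-1, 3).mean(axis=0)
        return np.linalg.norm(img - mean, axis=-1)

    error = np.abs(proxy_saliency(np.asarray(original, float))
                   - proxy_saliency(np.asarray(cvd_simulated, float)))
    if error.max() > 0:
        error = error / error.max()  # normalize to [0, 1]
    return error > threshold  # boolean mask of salient-error regions
```

The returned mask selects the pixels whose perceived saliency is expected to change under a color vision deficiency; those are the regions the method recolors.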

Keywords: color vision deficiencies; eye movements; eye-tracking; image enhancement; image segmentation; imagery; visual saliency.


Figures

Figure 1.
Unlike people with a normal vision system, people with a dichromatic vision system can easily recognize the word “NO.”
Figure 2.
Images taken during the eye-tracking session: Starting from the calibration (the far-left image), the eye-tracker records the eye movements, the saccadic movements, and the scanpaths.
Figure 3.
The visual perception of an image can be represented by the fixation points (red diamonds overlaid on the images) for both the normal vision system (left column) and the color blind vision system (right column). Some details are missed by people with color vision deficiencies, as revealed by the lack of fixation points on the details noticed by people with a normal vision system.
Figure 4.
Because there is no direct conversion between sRGB and CIE L*a*b*, we first convert from sRGB to CIE XYZ and then from CIE XYZ to CIE L*a*b*, as shown in the scheme.
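The two-step pipeline of Figure 4 can be written down with the standard D65 formulas; this is a plausible reconstruction of the conversion, not the authors' exact code.

```python
import numpy as np

# Linear sRGB -> CIE XYZ matrix and D65 reference white.
M = np.array([[0.4124564, 0.3575761, 0.1804375],
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])
WHITE = np.array([0.95047, 1.0, 1.08883])

def srgb_to_lab(rgb):
    """rgb: sRGB values in [0, 1], shape (..., 3) -> CIE L*a*b*."""
    rgb = np.asarray(rgb, dtype=float)
    # 1) undo the sRGB gamma to obtain linear RGB
    linear = np.where(rgb <= 0.04045, rgb / 12.92,
                      ((rgb + 0.055) / 1.055) ** 2.4)
    # 2) linear RGB -> CIE XYZ
    xyz = linear @ M.T
    # 3) CIE XYZ -> CIE L*a*b*
    t = xyz / WHITE
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t),
                 t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```

For example, pure white `[1.0, 1.0, 1.0]` maps to approximately `L* = 100, a* = 0, b* = 0`.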
Figure 5.
The RGB to CIE L*a*b* conversion allows us to perform color mapping within the range of colors that are well perceived by color blind people.
Figure 6.
(b) The saliency error is computed as the difference between the saliency maps of (a) the original image and its color blind version. (c) The saliency error regions are segmented and color boosted in the CIE L*a*b* color space by using the opposite values of the a* and b* channels, and (d) the enhancement is also mapped into the color blind domain.
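Reading the caption of Figure 6 literally, the color boost flips the sign of the a* and b* chromatic channels inside the segmented regions. A minimal sketch of that step, under the assumption that the boost is a plain sign inversion (the `boost_salient_regions` name and interface are ours):

```python
import numpy as np

def boost_salient_regions(lab_image, mask):
    """lab_image: (H, W, 3) array of L*, a*, b* values.
    mask: (H, W) boolean mask of the segmented saliency-error regions.
    Returns a copy with a* and b* replaced by their opposite values
    inside the mask; L* (lightness) is left untouched."""
    out = np.asarray(lab_image, dtype=float).copy()
    out[mask, 1] *= -1  # flip a* (green <-> red axis)
    out[mask, 2] *= -1  # flip b* (blue <-> yellow axis)
    return out
```

Flipping a* and b* moves a color to the opposite side of the chromatic plane, which pushes reds toward greens and yellows toward blues, hues that are more separable for dichromatic observers.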
Figure 7.
Highlights of the perceptual differences. For a given image (left), some enhancement methods use the average of the differences of the L*a*b* channels between the original image and its color blind version (center). We instead use the difference of the L*a*b* channels between the original image and its color blind version, weighted by the saliency difference (right).
Figure 8.
The fixation points (red diamonds overlaid on the images) of observers with a normal color vision system (left column) and those of people with protanopia. The images in the right column are from the second eye-tracking session, in which observers viewed the images enhanced by our method.
Figure 9.
Average NSS and AUC scores for the best 10, 20, 30, and 50 cases in the protanopia case study. Repeated-measures ANOVA returned a between-groups p value lower than .05 (*).
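Of the two metrics reported here, NSS (Normalized Scanpath Saliency) has a particularly compact definition: z-score the saliency map, then average the normalized values at the human fixation locations. A sketch of that standard formula (the function name is ours; the AUC metric is not shown):

```python
import numpy as np

def nss(saliency_map, fixation_mask):
    """Normalized Scanpath Saliency.
    saliency_map: (H, W) predicted saliency values.
    fixation_mask: (H, W) boolean map of human fixation locations.
    Higher is better; 0 means chance-level agreement."""
    s = np.asarray(saliency_map, dtype=float)
    s = (s - s.mean()) / s.std()          # z-score the map
    return s[np.asarray(fixation_mask, dtype=bool)].mean()
```

A map that peaks exactly where observers fixated yields a strongly positive NSS, which is how the figures compare fixations on the enhanced images against the normal-vision baseline.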
Figure 10.
Average NSS and AUC scores for the best 10, 20, 30, and 50 cases in the deuteranopia case study. Repeated-measures ANOVA returned a between-groups p value lower than .05 (*).
Figure 11.
For (a) a given image, we collected the fixation points from (b) a normal observer, (c) an observer with protanopia, and (d) an observer with deuteranopia.
Figure 12.
The enhancement assessment on images (a) and (b) is supported by the fixation point maps for observers with (c) protanopia and (d) deuteranopia.
Figure 13.
For a given image (first column), we collected the fixation points from normal observers (second column), from observers with protanopia (third column), and from observers with protanopia looking at the enhanced image (fourth column).
Figure 14.
For a given image (first column), we collected the fixation points from normal observers (second column), from observers with deuteranopia (third column), and from observers with deuteranopia looking at the enhanced image (fourth column).



How to cite this article

Bruno, A., Gugliuzza, F., Ardizzone, E., Giunta, C. C., & Pirrone, R. (2019). Image Content Enhancement Through Salient Regions Segmentation for People With Color Vision Deficiencies. i-Perception, 10(3), 1–21. doi:10.1177/2041669519841073