Sensors (Basel). 2022 Mar 9;22(6):2133. doi: 10.3390/s22062133.

Self-Supervised Learning Framework toward State-of-the-Art Iris Image Segmentation


Wenny Ramadha Putri et al. Sensors (Basel). 2022.

Abstract

Iris segmentation plays a pivotal role in iris recognition systems. Deep learning techniques developed in recent years have gradually been applied to iris recognition. Applying deep learning, however, requires large datasets with high-quality manual labels, and performance generally improves as the amount of training data grows. In this paper, we propose a self-supervised framework that uses the pix2pix conditional adversarial network to generate an unlimited number of diversified iris images. The generated iris images are then used to train an iris segmentation network to state-of-the-art performance. We also propose an algorithm that generates iris masks from 11 tunable parameters, which can be sampled randomly. Such a framework can produce an unlimited amount of photo-realistic training data for downstream tasks. Experimental results demonstrate that the proposed framework achieves promising results on all commonly used metrics. The framework can be generalized to other object segmentation tasks with a simple fine-tuning of the mask generation algorithm.
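The abstract outlines a pipeline in which 11 random mask parameters are sampled, iris and periocular masks are rendered from them, a pix2pix conditional generator synthesizes a photo-realistic image, and the result is used to train a segmentation network. The Python sketch below only illustrates that flow under stated assumptions: the image size, parameter names, and ranges are hypothetical, the render_masks helper is invented for illustration, and the conditional generator and segmentation-training steps are indicated only as comments.

    # Hypothetical sketch of the data-generation loop described in the abstract.
    # The 11 mask parameters are not listed in the abstract; the names, ranges,
    # and image size below are assumptions made purely for illustration.
    import numpy as np

    H, W = 240, 320  # assumed image size

    def sample_mask_params(rng):
        """Randomly sample a (partial) parameter set controlling one synthetic mask pair."""
        return {
            "iris_cx": rng.uniform(0.4, 0.6) * W,   # iris centre (x)
            "iris_cy": rng.uniform(0.4, 0.6) * H,   # iris centre (y)
            "iris_r": rng.uniform(0.15, 0.25) * W,  # iris radius
            "pupil_ratio": rng.uniform(0.2, 0.5),   # pupil radius / iris radius
            # ...the remaining parameters (eyelid shape, occlusion, etc.) would go here
        }

    def render_masks(params):
        """Draw a ring-shaped iris mask and a crude periocular mask from the parameters."""
        yy, xx = np.mgrid[0:H, 0:W]
        d = np.hypot(xx - params["iris_cx"], yy - params["iris_cy"])
        iris_mask = (d <= params["iris_r"]) & (d >= params["iris_r"] * params["pupil_ratio"])
        periocular_mask = d <= params["iris_r"] * 2.0
        return iris_mask.astype(np.uint8), periocular_mask.astype(np.uint8)

    rng = np.random.default_rng(0)
    for _ in range(4):  # "unlimited" in principle: keep sampling new parameter sets
        iris_m, peri_m = render_masks(sample_mask_params(rng))
        # A pix2pix-style conditional generator (not shown) would map (iris_m, peri_m)
        # to a photo-realistic iris image; the (image, iris_m) pair then becomes one
        # training example for the downstream segmentation network.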

Keywords: biometrics; data augmentation; generative adversarial network; image semantic segmentation; iris segmentation.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. An overview of our framework.
Figure 2. The training architecture of the proposed method.
Figure 3. The generator architecture in the proposed method.
Figure 4. The discriminator architecture in the proposed method.
Figure 5. The process of automatic mask generation.
Figure 6. A pictorial explanation of how the 11 parameters used to generate the masks are determined.
Figure 7. A pictorial example of generated masks for iris images. (a) The generated iris mask; (b) the generated periocular mask; (c) the generated iris image; (d) the overlay of the iris mask, the periocular mask, and the generated image, showing that the generated masks fit the iris image perfectly.
Figure 8. The loss curve of the proposed method on training and testing data.
Figure 9. The pixel accuracy curve on training and testing data.
Figure 10. The mean pixel accuracy curve on training and testing data.
Figure 11. The mean intersection over union curve on training and testing data.
Figure 12. The frequency weighted intersection over union curve on training and testing data.
Figure 13. Examples of generated images (trained with CASIA-Iris-Thousands, without glasses) using the proposed iris image generation network.
Figure 14. Examples of generated images (trained with CASIA-Iris-Thousands, with glasses) using the proposed iris image generation network.
Figure 15. Examples of generated images (trained with the ICE dataset, with glasses). (a) Original iris image; (b) ground-truth label for the iris; (c) ground-truth label for the periocular region; (d) images generated using (b,c) as the conditional inputs to the proposed image generation network.
Figure 16. Examples of generated images (trained with the ICE dataset, without glasses). (a) Original iris image; (b) ground-truth label for the iris; (c) ground-truth label for the periocular region; (d) images generated using (b,c) as the conditional inputs to the proposed image generation network.
Figure 17. Examples of generated images (trained with the ICE dataset, with and without glasses). (a,d) Generated iris mask; (b,e) generated periocular mask; (c) generated image with glasses; (f) generated image without glasses.
Figure 18. The performance of the FCN network on the customized dataset in terms of: (a) Pixel Accuracy (PA); (b) Mean Pixel Accuracy (MPA); (c) Mean Intersection over Union (MIoU); (d) Frequency Weighted IoU (FWIoU). (The four metrics are sketched in code after this figure list.)
Figure 19. The performance of the Deeplab network on the customized dataset in terms of: (a) Pixel Accuracy (PA); (b) Mean Pixel Accuracy (MPA); (c) Mean Intersection over Union (MIoU); (d) Frequency Weighted IoU (FWIoU).
Figure 20. Examples of images generated by the proposed network.
Figure 21. Examples of images generated by Minaee and Abdolrashidi [70].
Figure 22. Examples of images generated by Yadav et al. [74].
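Figures 18 and 19 compare the FCN and Deeplab segmentation networks using four standard metrics. As a reference for how such curves are typically computed, the sketch below derives Pixel Accuracy, Mean Pixel Accuracy, Mean IoU, and Frequency Weighted IoU from a per-class confusion matrix using their usual definitions; the example matrix is invented and not taken from the paper.

    # Usual definitions of the four metrics reported in Figures 18 and 19, computed
    # from a confusion matrix whose rows are ground-truth classes and whose columns
    # are predicted classes.
    import numpy as np

    def segmentation_metrics(conf):
        conf = np.asarray(conf, dtype=np.float64)
        diag = np.diag(conf)               # correctly classified pixels per class
        gt = conf.sum(axis=1)              # ground-truth pixels per class
        pred = conf.sum(axis=0)            # predicted pixels per class
        total = conf.sum()

        pa = diag.sum() / total            # Pixel Accuracy (PA)
        mpa = np.mean(diag / gt)           # Mean Pixel Accuracy (MPA)
        iou = diag / (gt + pred - diag)    # per-class Intersection over Union
        miou = np.mean(iou)                # Mean IoU (MIoU)
        fwiou = np.sum((gt / total) * iou) # Frequency Weighted IoU (FWIoU)
        return pa, mpa, miou, fwiou

    # Example: background vs. iris confusion matrix (counts are illustrative only)
    print(segmentation_metrics([[900, 20], [15, 65]]))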

References

    1. Li Y.-H., Putri W.R., Aslam M.S., Chang C.-C. Robust Iris Segmentation Algorithm in Non-Cooperative Environments Using Interleaved Residual U-Net. Sensors. 2021;21:1434. doi: 10.3390/s21041434.
    2. Wang C., Zhu Y., Liu Y., He R., Sun Z. Joint iris segmentation and localization using deep multi-task learning framework. arXiv. 2019. arXiv:1901.11195.
    3. Li Y.-H., Savvides M. Automatic iris mask refinement for high performance iris recognition. Proceedings of the 2009 IEEE Workshop on Computational Intelligence in Biometrics: Theory, Algorithms, and Applications; Nashville, TN, USA. 30 March–2 April 2009; pp. 52–58.
    4. Li Y.-H., Savvides M. Iris Recognition, Overview. In: Biometrics Theory and Application. IEEE & Wiley; New York, NY, USA: 2009.
    5. Zhao Z., Kumar A. Towards more accurate iris recognition using deeply learned spatially corresponding features. Proceedings of the IEEE International Conference on Computer Vision; Venice, Italy. 22–29 October 2017; pp. 3809–3818.