Sensors (Basel). 2021 May 27;21(11):3721. doi: 10.3390/s21113721.

An Efficient and Accurate Iris Recognition Algorithm Based on a Novel Condensed 2-ch Deep Convolutional Neural Network


Guoyang Liu et al. Sensors (Basel).

Abstract

Recently, deep learning approaches, especially convolutional neural networks (CNNs), have attracted extensive attention in iris recognition. Although CNN-based approaches realize automatic feature extraction and achieve outstanding performance, they usually require more training samples and incur higher computational complexity than classic methods. This work focuses on training a novel condensed 2-channel (2-ch) CNN with few training samples for efficient and accurate iris identification and verification. A multi-branch CNN with three well-designed online augmentation schemes and radial attention layers is first proposed as a high-performance basic iris classifier. Then, both branch pruning and channel pruning are achieved by analyzing the weight distribution of the model. Finally, fast fine-tuning is optionally applied, which can significantly improve the performance of the pruned CNN while alleviating the computational burden. In addition, we further investigate the encoding ability of the 2-ch CNN and propose an efficient iris recognition scheme suitable for large-database application scenarios. Moreover, gradient-based analysis indicates that the proposed algorithm is robust to various image contaminations. We comprehensively evaluated our algorithm on three publicly available iris databases, and the results show it is suitable for real-time iris recognition.
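The key idea of a 2-ch network is that a verification pair is not compared via two separately encoded feature vectors; instead, the two images are stacked into a single two-channel input and the network judges the pair jointly. A minimal sketch of that pairing step in NumPy (array sizes and names are illustrative, not taken from the paper):

```python
import numpy as np

def make_2ch_pair(iris_a, iris_b):
    """Stack two normalized grayscale iris images (H x W) into one
    2-channel input (2 x H x W) for a 2-ch verification CNN."""
    assert iris_a.shape == iris_b.shape, "pair images must share dimensions"
    return np.stack([iris_a, iris_b], axis=0).astype(np.float32)

# Example: two 64 x 512 normalized iris strips
a = np.random.rand(64, 512)
b = np.random.rand(64, 512)
pair = make_2ch_pair(a, b)
print(pair.shape)  # (2, 64, 512)
```

The CNN then outputs a single match/non-match score for the stacked pair, which is what makes the approach trainable from few samples: every image pair, genuine or impostor, is a training example.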

Keywords: convolutional neural network; deep learning; iris recognition; network pruning; online augmentation.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
The overall architecture of our iris recognition algorithm. (a,b) show the verification and identification workflows, respectively.
Figure 2
The preprocessing stage, including localization and segmentation (a), normalization (b), longitudinal cropping and image enhancement (c), and an optional horizontal cropping step (d). The region between the inner and outer green boundaries in (a) is the segmented iris.
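The normalization step in (b) maps the annular iris region to a fixed-size rectangle. The paper does not list its exact parameters, so the sketch below is a generic Daugman-style rubber-sheet unwrapping with nearest-neighbor sampling, assuming concentric circular pupil/iris boundaries for simplicity:

```python
import numpy as np

def rubber_sheet(img, cx, cy, r_pupil, r_iris, out_h=64, out_w=512):
    """Unwrap the annular iris region into an out_h x out_w polar strip.
    Rows sweep radius (pupil -> limbus), columns sweep angle (0 -> 2*pi)."""
    thetas = np.linspace(0.0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(0.0, 1.0, out_h)
    out = np.zeros((out_h, out_w), dtype=img.dtype)
    for i, rr in enumerate(radii):
        r = r_pupil + rr * (r_iris - r_pupil)
        xs = np.clip(np.round(cx + r * np.cos(thetas)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(cy + r * np.sin(thetas)).astype(int), 0, img.shape[0] - 1)
        out[i, :] = img[ys, xs]
    return out

img = np.random.rand(280, 320)            # stand-in for a segmented eye image
strip = rubber_sheet(img, cx=160, cy=140, r_pupil=30, r_iris=100)
print(strip.shape)  # (64, 512)
```

A real pipeline would use non-concentric boundaries from the segmentation step and bilinear interpolation, but the fixed-size output strip is what the later cropping and enhancement stages operate on.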
Figure 3
Example outputs of each online augmentation layer. (a) A normalized and enhanced iris image randomly picked from the CASIA-V3-Interval database. (b–d) Examples of the brightness jitter, horizontal shift, and longitudinal scaling operations, respectively. The red rectangular window in (d) marks the mirrored part of the iris.
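The three augmentations can be applied on the fly to normalized strips. A NumPy sketch, with parameter values chosen for illustration rather than taken from the paper (note the horizontal shift wraps around, since a normalized iris is angularly periodic):

```python
import numpy as np

rng = np.random.default_rng(0)

def brightness_jitter(strip, max_delta=0.1):
    """Add a random uniform brightness offset, clipped back to [0, 1]."""
    return np.clip(strip + rng.uniform(-max_delta, max_delta), 0.0, 1.0)

def horizontal_shift(strip, max_shift=16):
    """Circularly shift columns: models in-plane eye rotation without
    losing any texture, because the angular axis wraps around."""
    return np.roll(strip, rng.integers(-max_shift, max_shift + 1), axis=1)

def longitudinal_scale(strip, scale=0.9):
    """Keep the top `scale` fraction of rows, then mirror-pad the bottom
    back to the original height (cf. the mirrored band in Figure 3d)."""
    h = strip.shape[0]
    keep = int(h * scale)
    kept = strip[:keep]
    pad = kept[::-1][: h - keep]
    return np.concatenate([kept, pad], axis=0)

strip = rng.random((64, 512))
augmented = longitudinal_scale(horizontal_shift(brightness_jitter(strip)))
print(augmented.shape)  # (64, 512)
```

Because all three operations preserve the strip size, they can be stacked in any order inside the training loop without touching the network input shape.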
Figure 4
The architecture of the proposed convolutional neural network. (a) presents the full-size 2-ch CNN (Structure A). (b,c) illustrate the branch-pruned and channel-pruned CNNs (Structures B and C), respectively. For convenience, a convolutional layer, a batch normalization layer, and a ReLU activation layer are integrated, in order, into a convolution block (conv). For a given convolution block, the kernel size and the number of output channels are marked in the upper left and right corners of the box, respectively. All convolutions use a stride of 1 in each direction. For the max-pooling layer (maxpool), the pooling region and pooling stride are likewise marked in the upper left and right corners of the box, respectively.
Figure 5
Demonstration of channel-level sparsity. Each entry in the matrix represents the L1 norm of a kernel. (a,b) illustrate the channel maps of the 2nd and 3rd convolutional layers. Brighter elements represent more important kernels. (c,d) are the corresponding pruned channel maps, where white regions are retained and black regions are discarded.
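Ranking kernels by L1 norm and discarding the weakest ones is a standard magnitude-based pruning criterion. A minimal sketch of that selection step (the keep ratio and weight shapes are illustrative, not the paper's values):

```python
import numpy as np

def prune_channels(weights, keep_ratio=0.5):
    """Rank a conv layer's output channels by the L1 norm of their kernels
    (summed over input channels and spatial extent) and keep only the
    strongest fraction. `weights` has shape (out_ch, in_ch, kh, kw)."""
    norms = np.abs(weights).sum(axis=(1, 2, 3))    # one L1 score per output channel
    k = max(1, int(len(norms) * keep_ratio))
    keep = np.sort(np.argsort(norms)[::-1][:k])    # indices of the k strongest channels
    return weights[keep], keep

w = np.random.randn(32, 16, 3, 3)
pruned, kept_idx = prune_channels(w, keep_ratio=0.25)
print(pruned.shape)  # (8, 16, 3, 3)
```

After pruning one layer's output channels, the next layer's corresponding input channels must be removed as well, which is why the fast fine-tuning step helps recover any accuracy lost in the process.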
Figure 6
Sample iris images randomly picked from the three databases.
Figure 7
The ROC curve of the proposed algorithm on the CASIA-V3-Interval database.
Figure 8
(a) Discarding accuracies under different discarding ratios and numbers of registered pictures. (b) Identification accuracies under different discarding ratios and numbers of registered pictures.
Figure 9
Visualization of convolution kernels randomly picked from each convolutional layer. Each 3 × 3 or 5 × 5 kernel is upsampled to a higher resolution by cubic interpolation.
Figure 10
Visualization of the radial attention layers in the proposed CNN architecture. (a,b) show the attention weights of radial attention layers 1 and 2, respectively.
Figure 11
Summary of time consumption under different conditions. (a,b) Time consumption of the screening procedure on GPU and CPU, respectively. (c,d) Time consumption of the identification procedure on GPU and CPU, respectively.
Figure 12
(a–f) Heat maps of the ROI visualized by the Grad-CAM algorithm. The most discriminative iris texture areas are marked in red and yellow. (g) The colorbar of the heat maps: yellow and red areas represent higher scores, green and blue areas correspond to medium scores, and the bottom 20% of scores is set to zero.
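The zeroing of the bottom 20% of scores described for the heat maps is a simple post-processing step on the raw class-activation map. A NumPy sketch of that normalization and thresholding (the Grad-CAM computation itself, which requires network gradients, is omitted):

```python
import numpy as np

def postprocess_cam(cam, zero_fraction=0.2):
    """Normalize a raw activation map to [0, 1] and zero out the lowest
    `zero_fraction` of scores, as done for the Figure 12 heat maps."""
    cam = cam - cam.min()
    if cam.max() > 0:
        cam = cam / cam.max()
    cutoff = np.quantile(cam, zero_fraction)
    cam[cam < cutoff] = 0.0
    return cam

heat = postprocess_cam(np.random.rand(8, 8))
```

Suppressing the low-score tail keeps the overlay readable: only the regions the network actually relied on remain colored when the map is blended with the iris image.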

