Sensors (Basel). 2019 Apr 6;19(7):1643. doi: 10.3390/s19071643.

Robust Face Recognition Based on a New Supervised Kernel Subspace Learning Method

Ali Khalili Mobarakeh et al. Sensors (Basel). 2019.

Abstract

Face recognition is one of the most popular techniques for establishing a person's identity. This study develops a new non-linear subspace learning method, named "supervised kernel locality-based discriminant neighborhood embedding" (SKLDNE), which performs data classification by learning an optimal embedded subspace from the original high-dimensional space. In this approach, nonlinear kernel mapping effectively represents the nonlinear and complex variation of face images, while local structure information from same-class data and discriminant information from distinct classes are simultaneously preserved to further improve the final classification performance. To evaluate the robustness of the proposed method, it was compared with several well-known pattern recognition methods in comprehensive experiments on six publicly accessible datasets. The results show that our method consistently outperforms its competitors, demonstrating strong potential for implementation in many real-world systems.
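The approach outlined in the abstract can be sketched in code: a kernel maps samples into a nonlinear feature space, supervised within-class (attraction) and between-class (repulsion) neighborhood graphs are built, and a projection is found that preserves same-class locality while separating distinct classes. This is a minimal illustration only, not the paper's exact algorithm; the function names and the parameters `k`, `gamma`, and `reg` are assumptions, and an RBF kernel with a generalized eigenproblem formulation is used here as a common choice for this family of methods.

```python
import numpy as np
from scipy.linalg import eigh


def rbf_kernel(X, Y, gamma=0.5):
    # Pairwise squared Euclidean distances, then a Gaussian (RBF) kernel.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)


def kernel_discriminant_embedding(X, y, n_components=2, k=3, gamma=0.5, reg=1e-6):
    """Sketch of a supervised kernel neighborhood embedding (hypothetical names).

    Builds within-class (attraction) and between-class (repulsion) k-NN
    graphs, forms their graph Laplacians, and solves the generalized
    eigenproblem  K L_b K a = lambda K L_w K a  in kernel space.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T

    # Supervised neighborhood graphs: nearest same-class and different-class points.
    Ww = np.zeros((n, n))
    Wb = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(d2[i])
        same = [j for j in order if j != i and y[j] == y[i]][:k]
        diff = [j for j in order if y[j] != y[i]][:k]
        Ww[i, same] = 1.0
        Wb[i, diff] = 1.0
    Ww = np.maximum(Ww, Ww.T)  # symmetrize
    Wb = np.maximum(Wb, Wb.T)

    Lw = np.diag(Ww.sum(1)) - Ww  # within-class Laplacian (compactness)
    Lb = np.diag(Wb.sum(1)) - Wb  # between-class Laplacian (separability)

    A = K @ Lb @ K
    B = K @ Lw @ K + reg * np.eye(n)  # small ridge keeps B positive definite
    vals, vecs = eigh(A, B)           # eigenvalues in ascending order
    alphas = vecs[:, ::-1][:, :n_components]  # keep the largest-ratio directions
    return K @ alphas                 # embedded training samples, shape (n, n_components)
```

Maximizing the between-class Laplacian term while normalizing by the within-class one is what realizes the "attraction and repulsion" behavior illustrated in Figure 1: same-class neighbors are pulled together and different-class neighbors pushed apart in the learned subspace.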

Keywords: biometrics; dimensionality reduction; face recognition; kernel trick; manifold learning; subspace learning.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. The interactions by attraction and repulsion for the points between different classes.

Figure 2. A sample of pre-cropped face images in the Sheffield Face database [51].

Figure 3. (a–k) Comparative recognition results as the dimensionality of the transformation matrix varies, for each given training number Tn (Sheffield Face database).

Figure 4. (a) A subset of the original Yale database. (b) A subset of cropped images [53].

Figure 5. (a–e) Comparative recognition results as the dimensionality of the transformation matrix varies, for each given training number Tn (Yale Face database).

Figure 6. Example of six different subjects (each with 4 images) from the ORL database [56].

Figure 7. (a–f) Comparative recognition results as the dimensionality of the transformation matrix varies, for each given training number Tn (ORL database).

Figure 8. A subset of images of one subject from the Head Pose database [59].

Figure 9. (a–f) Comparative recognition results as the dimensionality of the transformation matrix varies, for each given training number Tn (Head Pose database).

Figure 10. Example of captured images of one person in the Finger Vein database [61].

Figure 11. (a–g) Comparative recognition results as the dimensionality of the transformation matrix varies, for each given training number Tn (Finger Vein database).

Figure 12. A cropped sample of the finger knuckle print (FKP) database [63].

Figure 13. (a–i) Comparative recognition results as the dimensionality of the transformation matrix varies, for each given training number Tn (Finger Knuckle database).

Figure 14. (a–f) Maximum recognition rate of SKLDNE versus Wk for different numbers of training samples on the Sheffield, Yale, ORL, Head Pose, Finger Vein and Finger Knuckle databases.

References

    1. Yang M.-H., Ahuja N., Kriegman D. Face recognition using kernel eigenfaces; Proceedings of the International Conference on Image Processing; Vancouver, BC, Canada. 10–13 September 2000.
    2. Belhumeur P.N., Hespanha J.P., Kriegman D.J. Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection. Yale University; New Haven, CT, USA: 1997.
    3. Yanagawa Y., Sakuragi M., Minato Y. Face Identification Device. No. 7,853,052. U.S. Patent. 2010 Dec 14.
    4. Yang G., Xi X., Yin Y. Finger vein recognition based on a personalized best bit map. Sensors. 2012;12:1738–1757. doi: 10.3390/s120201738. - DOI - PMC - PubMed
    5. Rosdi B.A., Shing C.W., Suandi S.A. Finger vein recognition using local line binary pattern. Sensors. 2011;11:11357–11371. doi: 10.3390/s111211357. - DOI - PMC - PubMed
