
Joint Sparsity-Based Robust Multimodal Biometrics Recognition

Sumit Shekhar1, Vishal M. Patel1, Nasser M. Nasrabadi2, and Rama Chellappa1

1University of Maryland, College Park, USA

2Army Research Lab, Adelphi, USA

Abstract. Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a novel multimodal multivariate sparse representation method for multimodal biometrics recognition, which represents the test data by a sparse linear combination of training data while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information between biometric modalities. Furthermore, the model is modified to make it robust to noise and occlusion. The resulting optimization problem is solved using an efficient alternating direction method. Experiments on a challenging public dataset show that our method compares favorably with competing fusion-based methods.
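The joint-sparsity idea in the abstract can be illustrated with a small sketch. Below, each modality d has its own training dictionary X_d and test observation y_d, and the coefficient matrix G holds one column per modality; an ℓ1/ℓ2 (row-sparsity) penalty couples the columns so all modalities select the same training samples. This is a generic proximal-gradient (ISTA-style) solver for the row-sparse multi-task lasso, not the authors' alternating direction method; all names and parameters here are illustrative assumptions, not from the paper.

```python
import numpy as np

def joint_sparse_code(dicts, obs, lam=0.05, n_iter=500):
    """Illustrative row-sparse joint coding across modalities.

    dicts: list of (m_d, n) dictionaries (training features per modality)
    obs:   list of (m_d,) test observations, one per modality
    Minimizes sum_d 0.5*||y_d - X_d g_d||^2 + lam * sum_i ||row_i(G)||_2
    via proximal gradient descent (a sketch, not the paper's ADMM solver).
    """
    n = dicts[0].shape[1]
    G = np.zeros((n, len(dicts)))
    # step size from the largest per-modality Lipschitz constant
    step = 1.0 / max(np.linalg.norm(X, 2) ** 2 for X in dicts)
    for _ in range(n_iter):
        # gradient of the quadratic data-fidelity term, per modality
        grad = np.column_stack([X.T @ (X @ G[:, d] - y)
                                for d, (X, y) in enumerate(zip(dicts, obs))])
        Z = G - step * grad
        # prox of the l1/l2 penalty: row-wise group soft-thresholding,
        # which zeroes entire rows and so enforces a shared sparse support
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        G = np.maximum(1.0 - lam * step / np.maximum(norms, 1e-12), 0.0) * Z
    return G
```

In a recognition setting, the recovered coefficients would then be compared per training class (e.g., by class-wise reconstruction error) to decide the identity; the robust variant in the paper additionally models noise and occlusion, which this sketch omits.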

LNCS 7585, p. 365 ff.



© Springer-Verlag Berlin Heidelberg 2012