
Continuous Regression for Non-rigid Image Alignment

Enrique Sánchez-Lozano1, Fernando De la Torre2, and Daniel González-Jiménez1

1Multimodal Information Area, Gradiant, Vigo, Pontevedra, 36310, Spain
esanchez@gradiant.org

2Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
ftorre@cs.cmu.edu

Abstract. Parameterized Appearance Models (PAMs) such as Active Appearance Models (AAMs), Morphable Models and Boosted Appearance Models have been extensively used for face alignment. Broadly speaking, PAM methods can be classified into generative and discriminative. Discriminative methods learn a mapping between appearance features and motion parameters (rigid and non-rigid). While discriminative approaches have some advantages (e.g., feature weighting, improved generalization), they suffer from two major drawbacks: (1) they need large numbers of perturbed samples to train a regressor or classifier, making the training process computationally expensive in space and time, and (2) it is not practical to sample the space of motion parameters uniformly. In practice, some regions of the motion space are sampled more densely than others, resulting in biased models and a lack of generalization. To solve these problems, this paper proposes a computationally efficient continuous regressor that does not require the sampling stage. Experiments on real data show improvements in the memory and time required to train a discriminative appearance model, as well as improved generalization.
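For intuition only, the sketch below (not the authors' implementation) contrasts the usual sampling-based training of a linear regressor from appearance features to motion-parameter perturbations with a continuous variant in which the sums over sampled perturbations are replaced by closed-form expectations. It assumes a first-order appearance model x(dp) ≈ x0 + J·dp with perturbations drawn uniformly per parameter; x0, J, a and the function names are illustrative stand-ins, not quantities from the paper.

import numpy as np

rng = np.random.default_rng(0)

d, n = 64, 6                       # feature dimension, number of motion parameters
x0 = rng.standard_normal(d)        # appearance extracted at the reference parameters
J = rng.standard_normal((d, n))    # stand-in appearance Jacobian w.r.t. the parameters
a = np.full(n, 0.5)                # half-width of the uniform perturbation range per parameter
lam = 1e-3                         # ridge regularizer

def sampled_regressor(num_samples=20000):
    """Classic discriminative training: draw perturbations, solve ridge regression."""
    dP = rng.uniform(-a, a, size=(num_samples, n))   # perturbed motion parameters
    X = x0 + dP @ J.T                                # corresponding (linearized) appearances
    # Empirical second moments (divided by N so the regularizer matches the continuous case).
    Exx = X.T @ X / num_samples
    Epx = dP.T @ X / num_samples
    return Epx @ np.linalg.inv(Exx + lam * np.eye(d))

def continuous_regressor():
    """Continuous counterpart: swap the empirical sums for closed-form moments of dp."""
    Sigma = np.diag(a**2 / 3.0)                  # E[dp dp^T] for dp ~ Uniform[-a, a]
    Exx = np.outer(x0, x0) + J @ Sigma @ J.T     # E[x x^T] under the linear model
    Epx = Sigma @ J.T                            # E[dp x^T], since E[dp] = 0
    return Epx @ np.linalg.inv(Exx + lam * np.eye(d))

# The two regressors agree up to Monte Carlo error, but the continuous one
# never materializes the (num_samples x d) matrix of perturbed appearances.
print(np.linalg.norm(sampled_regressor() - continuous_regressor()))

The point of the comparison is the one made in the abstract: once the required moments are available in closed form, no set of perturbed training samples has to be generated or stored.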

LNCS 7578, p. 250 ff.



© Springer-Verlag Berlin Heidelberg 2012