
Fusion of Multiple Visual Cues for Visual Saliency Extraction from Wearable Camera Settings with Strong Motion

Hugo Boujut¹, Jenny Benois-Pineau¹, and Rémi Mégret²

¹ University of Bordeaux, LaBRI, UMR 5800, F-33400, Talence, France
hugo.boujut@labri.fr
benois-p@labri.fr

² University of Bordeaux, IMS, UMR 5218, F-33400, Talence, France
remi.megret@ims-bordeaux.fr

Abstract. In this paper we are interested in the saliency of visual content from wearable cameras. The subjective saliency in wearable video is first studied through a psycho-visual experiment on this content. We then propose a method for objective saliency map computation, with a specific contribution based on geometric saliency. Spatial, temporal, and geometric cues are fused into an objective saliency map by a multiplicative operator. The resulting objective saliency maps are evaluated against the subjective maps with promising results, highlighting the strong performance of the proposed geometric saliency model.
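The multiplicative fusion mentioned in the abstract can be illustrated with a minimal sketch. The function names, the per-cue normalization step, and the use of NumPy arrays are assumptions for illustration; the paper itself only states that the three cue maps are combined by a multiplicative operator.

```python
import numpy as np

def normalize(m):
    # Assumed preprocessing: scale a saliency map to [0, 1] so that
    # no single cue dominates the product purely by its dynamic range.
    m = m.astype(float)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_multiplicative(spatial, temporal, geometric):
    # Multiplicative fusion of the three cues: a pixel receives high
    # saliency only when all three cues agree that it is salient.
    # (Hypothetical function; names are not from the paper.)
    fused = normalize(spatial) * normalize(temporal) * normalize(geometric)
    return normalize(fused)
```

With this operator, a zero in any single cue suppresses the pixel entirely, which is the characteristic (and sometimes criticized) behavior of multiplicative fusion compared to additive schemes.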

LNCS 7585, p. 436 ff.



© Springer-Verlag Berlin Heidelberg 2012