Viewpoint Invariant Matching via Developable Surfaces

Bernhard Zeisl, Kevin Köser, and Marc Pollefeys

Computer Vision and Geometry Group, ETH Zurich, Switzerland
zeislb@inf.ethz.ch, kkoeser@inf.ethz.ch, pomarc@inf.ethz.ch

Abstract. Stereo systems, time-of-flight cameras, laser range sensors and consumer depth cameras nowadays produce a wealth of image data with depth information (RGBD), yet the number of approaches that can take advantage of color and geometry data at the same time is quite limited. We address the topic of wide-baseline matching between two RGBD images, i.e., finding correspondences from largely different viewpoints for recognition, model fusion or loop detection. We normalize local image features with respect to the underlying scene geometry and show a significantly increased number of correspondences. Rather than moving a virtual camera to some position in front of a dominant scene plane, we propose to unroll developable scene surfaces and to detect features directly in the "wall paper" of the scene. This allows viewpoint-invariant matching also in scenes with curved architectural elements or with objects such as bottles, cans or (partial) cones. We demonstrate the usefulness of our approach on several real-world scenes containing different objects.

LNCS 7584, p. 62 ff.
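To make the unrolling idea concrete, the following minimal sketch (an illustration under simplifying assumptions, not the authors' implementation) flattens points lying on a circular cylinder into 2D "wall paper" coordinates. Because a cylinder is a developable surface, this mapping is an isometry: arc length along the circumference becomes one flat axis, height along the cylinder axis becomes the other, so features detected in the unrolled image are independent of the viewing direction. The function name and the fixed z-axis cylinder are assumptions made for this sketch.

```python
import numpy as np

def unroll_cylinder(points, radius):
    """Map 3D points on a cylinder (axis = z-axis, centered at the
    origin) to 2D 'wall paper' coordinates.

    u = radius * theta  -- arc length around the circumference
    v = z               -- height along the cylinder axis

    The mapping preserves distances on the surface (isometric
    unrolling), which is exactly the property that makes a
    developable surface flattenable without distortion.
    """
    pts = np.asarray(points, dtype=float)
    theta = np.arctan2(pts[:, 1], pts[:, 0])  # angle around the axis
    u = radius * theta                        # circumference -> flat x
    v = pts[:, 2]                             # height        -> flat y
    return np.stack([u, v], axis=1)

# Example: a point a quarter turn around a cylinder of radius 2, at
# height 1, lands at arc length 2 * (pi/2) = pi in the flat image.
uv = unroll_cylinder([[0.0, 2.0, 1.0]], radius=2.0)
```

In a full pipeline one would estimate the cylinder (or cone) parameters from the depth data first, resample the RGB image over the unrolled grid, and then run a standard feature detector on the resulting flat texture.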