Background Inpainting for Videos with Dynamic Objects and a Free-Moving Camera

Miguel Granados [1], Kwang In Kim [1], James Tompkin [1,2,3], Jan Kautz [2], and Christian Theobalt [1]

[1] Max-Planck-Institut für Informatik, Campus E1 4, 66123 Saarbrücken, Germany
[2] University College London, Malet Place, WC1E 6BT, London, UK
[3] Intel Visual Computing Institute, Campus E2 1, 66123 Saarbrücken, Germany

Abstract. We propose a method for removing marked dynamic objects from videos captured with a free-moving camera, provided that the objects occlude parts of the scene with a static background. Our approach takes as input a video, a mask marking the object to be removed, and a mask marking the dynamic objects to remain in the scene. To inpaint a frame, we align other candidate frames in which parts of the missing region are visible. Among these candidates, a single source is chosen to fill each pixel so that the final arrangement is color-consistent. Intensity differences between sources are smoothed using gradient-domain fusion. Our frame alignment process assumes that the scene can be approximated by piecewise planar geometry: a set of homographies is estimated for each frame pair, and one is selected per pixel such that the color discrepancy is minimized and the epipolar constraints are maintained. We provide experimental validation with several real-world video sequences to demonstrate that, unlike in previous work, inpainting videos shot with free-moving cameras does not necessarily require estimating absolute camera positions and per-frame per-pixel depth maps.

Keywords: video processing, video completion, video inpainting, image alignment, background estimation, free-camera, graph-cuts

LNCS 7572, p. 682 ff. lncs@springer.com
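The per-pixel homography selection described in the abstract can be sketched as follows. This is a minimal NumPy sketch under simplifying assumptions: the helper names (warp_points, select_homography) are hypothetical, the epipolar-constraint term and the graph-cut color-consistency optimization of the actual method are omitted, and only the raw color discrepancy between the warped source and the target frame is minimized.

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of pixel coordinates
    (hypothetical helper; coordinates are (x, y))."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]             # perspective divide

def select_homography(Hs, pts, target_frame, source_frame):
    """For each pixel in pts, pick the index of the candidate homography in Hs
    whose warp yields the smallest color discrepancy between source and target.
    Simplified sketch: nearest-neighbor sampling, no epipolar or smoothness term."""
    costs = []
    h, w = source_frame.shape[:2]
    for H in Hs:
        warped = np.rint(warp_points(H, pts)).astype(int)
        warped[:, 0] = np.clip(warped[:, 0], 0, w - 1)  # clamp to image bounds
        warped[:, 1] = np.clip(warped[:, 1], 0, h - 1)
        src = source_frame[warped[:, 1], warped[:, 0]].astype(float)
        tgt = target_frame[pts[:, 1], pts[:, 0]].astype(float)
        costs.append(np.abs(src - tgt).sum(axis=-1))    # per-pixel color discrepancy
    return np.argmin(np.stack(costs), axis=0)           # best homography per pixel
```

In the full method, this per-pixel choice is regularized (e.g. via graph cuts over the candidate labels) rather than taken independently, and candidates violating the epipolar geometry between the frame pair are penalized.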