Real-Time Camera Tracking: When is High Frame-Rate Best?

Ankur Handa, Richard A. Newcombe, Adrien Angeli, and Andrew J. Davison
Department of Computing, Imperial College London, UK
ahanda@doc.ic.ac.uk, rnewcomb@doc.ic.ac.uk, aangeli@doc.ic.ac.uk, ajd@doc.ic.ac.uk

Abstract. Higher frame-rates promise better tracking of rapid motion, but advanced real-time vision systems rarely exceed the standard 10–60 Hz range, arguing that the computation required would be too great. In fact, the cost of increasing frame-rate is mitigated by the reduced computation needed per frame in trackers which take advantage of prediction. Additionally, when we consider the physics of image formation, a high frame-rate implies that the upper bound on shutter time is reduced, leading to less motion blur but more noise. Putting these factors together, how are the application-dependent performance requirements of accuracy, robustness and computational cost optimised as frame-rate varies? Using 3D camera tracking as our test problem, and analysing a fundamental dense whole-image alignment approach, we open up a route to a systematic investigation via the careful synthesis of photorealistic video, using ray-tracing of a detailed 3D scene, experimentally obtained photometric response and noise models, and rapid camera motions. Our multi-frame-rate, multi-resolution, multi-light-level dataset is based on tens of thousands of hours of CPU rendering time. Our experiments lead to quantitative conclusions about frame-rate selection and highlight the crucial role of a full consideration of physical image formation in pushing tracking performance.

LNCS 7578, p. 222 ff.
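The shutter-time argument in the abstract can be made concrete with a back-of-the-envelope image-formation model: halving the frame period halves the maximum exposure, which shortens the motion-blur streak but also halves the collected photons and therefore lowers the signal-to-noise ratio. The sketch below is illustrative only; the photon flux, read noise and camera speed are assumed values, not figures from the paper.

```python
# Illustrative trade-off between motion blur and shot noise as frame-rate rises.
# All constants (photon flux, read noise, apparent image speed) are assumptions.
import numpy as np

def exposure_tradeoff(frame_rate_hz, photon_flux=5e4, read_noise_e=5.0,
                      image_speed_px_per_s=500.0):
    """Return (blur_px, snr) for a shutter held open for the whole frame period."""
    shutter_s = 1.0 / frame_rate_hz            # upper bound on exposure time
    signal_e = photon_flux * shutter_s         # collected photo-electrons
    shot_noise_e = np.sqrt(signal_e)           # Poisson shot noise
    snr = signal_e / np.sqrt(shot_noise_e**2 + read_noise_e**2)
    blur_px = image_speed_px_per_s * shutter_s # length of the motion-blur streak
    return blur_px, snr

for fps in (30, 120, 480):
    blur, snr = exposure_tradeoff(fps)
    print(f"{fps:4d} Hz: blur ~{blur:5.1f} px, SNR ~{snr:5.1f}")
```

Under these assumed numbers, going from 30 Hz to 480 Hz cuts the blur streak from roughly 17 px to 1 px while the SNR drops from about 40 to 9, which is the less-blur-but-more-noise trade-off the abstract describes.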
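The abstract's other two ingredients, dense whole-image alignment and prediction, can also be sketched together. The following is a minimal 2D-translation stand-in for the paper's full 3D camera-pose alignment, not the authors' implementation: it minimises a summed photometric error with Gauss-Newton steps, warm-started from a predicted motion. At high frame-rate the inter-frame motion (and hence the number of iterations) is small, which is the per-frame cost reduction the abstract refers to.

```python
# Minimal dense photometric alignment over a 2D translation (Lucas-Kanade style),
# warm-started from a motion prediction. Illustrative sketch only; the paper's
# tracker aligns whole images over full 3D camera poses.
import numpy as np

def align_translation(template, image, predicted=(0.0, 0.0), iters=20):
    template = template.astype(np.float64)
    image = image.astype(np.float64)
    gy, gx = np.gradient(image)                       # image gradients (y, x)
    p = np.array(predicted, dtype=np.float64)         # warm start from prediction
    ys, xs = np.mgrid[0:template.shape[0], 0:template.shape[1]]
    for _ in range(iters):
        # Warp with the current estimate (nearest-neighbour for brevity).
        u = np.clip((xs + p[0]).round().astype(int), 0, image.shape[1] - 1)
        v = np.clip((ys + p[1]).round().astype(int), 0, image.shape[0] - 1)
        r = image[v, u] - template                    # photometric residual
        J = np.stack([gx[v, u].ravel(), gy[v, u].ravel()], axis=1)
        # Gauss-Newton step: solve J^T J dp = -J^T r in least-squares form.
        dp = np.linalg.lstsq(J, -r.ravel(), rcond=None)[0]
        p += dp
        if np.linalg.norm(dp) < 1e-3:                 # good prediction => few iterations
            break
    return p
```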