UNDERSTANDING DYNAMIC SCENES BY HIERARCHICAL MOTION PATTERN MINING
Lei Song, Fan Jiang, Zhongke Shi, Aggelos Katsaggelos

Abstract
Our work addresses the problem of analyzing and understanding dynamic video scenes. A two-level motion pattern mining approach is proposed. At the first level, single-agent motion patterns are modeled as distributions over pixel-based features. At the second level, interaction patterns are modeled as distributions over single-agent motion patterns. Both types of patterns are shared across video clips. Compared with other work, the advantage of our method is that interaction patterns are detected and assigned to every video frame. This enables a finer semantic interpretation and more precise anomaly detection. Specifically, every video frame is labeled with an interaction pattern, and moving pixels in each frame that do not belong to any single-agent pattern, or that are inconsistent with the corresponding interaction pattern, are detected as anomalies. We have tested our approach on a challenging traffic surveillance sequence containing both pedestrian and vehicular motions and obtained promising results.
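To make the two-level structure concrete, the following is a minimal sketch of how frame labeling and anomaly flagging could work once both levels have been learned. It is not the paper's actual model or inference procedure; the parameter names (phi for single-agent pattern distributions over quantized pixel features, theta for interaction-pattern distributions over single-agent patterns), the random toy parameters, and the simple likelihood thresholds are all assumptions introduced here for illustration.

```python
import numpy as np

# Hypothetical learned parameters (names, shapes, and values are assumptions, not from the paper):
#   phi[k, w]   - probability of quantized pixel feature w under single-agent pattern k
#   theta[m, k] - probability of single-agent pattern k under interaction pattern m
rng = np.random.default_rng(0)
K, M, W = 8, 3, 200                         # single-agent patterns, interaction patterns, feature codewords
phi = rng.dirichlet(np.ones(W), size=K)     # shape (K, W)
theta = rng.dirichlet(np.ones(K), size=M)   # shape (M, K)

def label_frame(features, phi, theta, eps=1e-4):
    """Assign an interaction-pattern label to one frame and flag anomalous pixels.

    `features` holds the quantized pixel-feature indices observed in the frame.
    A pixel is flagged if it is unlikely under every single-agent pattern, or if its
    best-matching pattern is effectively inactive in the frame's interaction pattern.
    """
    # Likelihood of each observed feature under every single-agent pattern: (n_pixels, K)
    pixel_lik = phi[:, features].T
    # Frame log-likelihood under each interaction pattern, marginalizing over patterns
    frame_loglik = np.log(pixel_lik @ theta.T + 1e-12).sum(axis=0)   # shape (M,)
    m_star = int(np.argmax(frame_loglik))

    best_pattern = pixel_lik.argmax(axis=1)           # most likely single-agent pattern per pixel
    unexplained = pixel_lik.max(axis=1) < eps         # fits no single-agent pattern
    inconsistent = theta[m_star, best_pattern] < eps  # pattern not active in this interaction
    return m_star, unexplained | inconsistent

# Toy usage on randomly quantized features for a single frame
features = rng.integers(0, W, size=500)
label, anomalous = label_frame(features, phi, theta)
print(f"interaction pattern {label}, {anomalous.sum()} anomalous pixels flagged")
```

This mirrors the two anomaly criteria described in the abstract: pixels explained by no single-agent pattern, and pixels whose pattern cannot occur under the frame's assigned interaction pattern.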