Temporally Consistent Occlusion Boundaries in Videos using Geometric Context

Qualitative results for occlusion boundary prediction: (left) ground truth; (right) occlusion boundaries predicted using geometric, flow, and appearance features with temporal smoothing (temporal window size = 30). Occlusion and non-occlusion boundaries are shown in red and blue, respectively.


Authors

S. Hussain Raza
Ahmad Humayun
Matthias Grundmann
David Anderson
Irfan Essa

Abstract
We present an algorithm for finding temporally consistent occlusion boundaries in videos to support segmentation of dynamic scenes. We learn occlusion boundaries in a pairwise Markov random field (MRF) framework. We first estimate the probability of a spatio-temporal edge being an occlusion boundary using appearance, flow, and geometric features. Next, we enforce occlusion boundary continuity in an MRF model by learning pairwise occlusion probabilities with a random forest. Finally, we temporally smooth the boundaries to remove temporal inconsistencies in the occlusion boundary estimates. Our proposed framework provides an efficient approach for finding temporally consistent occlusion boundaries in video by exploiting causality, the redundancy in videos, and the semantic layout of the scene. We have developed a dataset with fully annotated ground-truth occlusion boundaries for over 30 videos (∼5000 frames). This dataset is used to evaluate temporal occlusion boundaries and provides a much-needed baseline for future studies. We perform experiments to demonstrate the role of scene layout and temporal information for occlusion reasoning in videos of dynamic scenes.
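The temporal smoothing step in the abstract can be illustrated with a minimal sketch. Assuming per-edge occlusion probabilities have already been produced by the classifier and the edges are tracked across frames as a (frames × edges) array, the sketch below averages each edge's probability over a centred temporal window before thresholding. The array layout, function names, and default threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def temporal_smooth(probs, window=30):
    """Smooth per-edge occlusion probabilities over a temporal window.

    probs: (T, E) array of occlusion probabilities for E boundary edges
    tracked across T frames (hypothetical layout; the paper's actual
    edge-tracking representation may differ).
    Returns an array of the same shape where each frame's estimate is
    the mean over a centred window of up to `window` frames.
    """
    T, _ = probs.shape
    smoothed = np.empty_like(probs)
    half = window // 2
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        smoothed[t] = probs[lo:hi].mean(axis=0)
    return smoothed

def label_edges(probs, threshold=0.5):
    """Binarise smoothed probabilities into occlusion / non-occlusion."""
    return probs >= threshold
```

With a window of 30 frames (as in the figure above), a single-frame misclassification of a tracked edge is outvoted by its temporal neighbours, which is the intuition behind removing temporal inconsistencies.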
Paper

Download PDF

Video
Citation
@inproceedings{TemporalOcclusionBoundaries2015,
  author = {S. Hussain Raza and Ahmad Humayun and Matthias Grundmann and David Anderson and Irfan Essa},
  title = {Temporally Consistent Occlusion Boundaries in Videos using Geometric Context},
  booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year = {2015},
}
Dataset
You can download the original videos and ground-truth annotations.
videoOcclusion_dataset
For questions, please contact us by e-mail.
Funding
This research is supported by:
  • DARPA Grant
  • Google Grant
Copyright

This material is based in part on research by the Defense Advanced Research Projects Agency (DARPA) under Contract No. W31P4Q-10-C-0214, and by a Google Grant and a Google PhD Fellowship for Matthias Grundmann, who participated in this research as a graduate student at Georgia Tech. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of any of the sponsors funding this research.