Contribution Details
Type | Conference or Workshop Paper |
Scope | Discipline-based scholarship |
Published in Proceedings | Yes |
Title | Unsupervised Moving Object Detection via Contextual Information Separation |
Organization Unit | |
Authors | |
Presentation Type | paper |
Item Subtype | Original Work |
Refereed | Yes |
Status | Published in final form |
Language | |
ISBN | 978-1-7281-3293-8 |
Page Range | 879 - 888 |
Event Title | 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) |
Event Type | conference |
Event Location | Long Beach, CA, USA |
Event Start Date | 15 June 2019 |
Event End Date | 20 June 2019 |
Publisher | IEEE |
Abstract Text | We propose an adversarial contextual model for detecting moving objects in images. A deep neural network is trained to predict the optical flow in a region using information from everywhere else but that region (context), while another network attempts to make such context as uninformative as possible. The result is a model where hypotheses naturally compete with no need for explicit regularization or hyper-parameter tuning. Although our method requires no supervision whatsoever, it outperforms several methods that are pre-trained on large annotated datasets. Our model can be thought of as a generalization of classical variational generative region-based segmentation, but in a way that avoids explicit regularization or solution of partial differential equations at run-time. |
Digital Object Identifier | 10.1109/cvpr.2019.00097 |
Other Identification Number | merlin-id:20288 |
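The abstract's central idea, that a region belongs to a moving object when its optical flow cannot be predicted from the flow of its surrounding context, can be illustrated with a toy sketch. This is not the paper's method (the authors train two deep networks adversarially); here a hypothetical mean-of-context "inpainter" stands in for the learned predictor, and all names and the toy flow field are illustrative assumptions:

```python
import numpy as np

def inpaint_from_context(flow, mask):
    # Hypothetical stand-in for the learned inpainter network:
    # predict the masked region as the mean flow of its context.
    context = flow[~mask]
    return np.full(int(mask.sum()), context.mean())

def separation_score(flow, mask):
    # Adversarial objective: how poorly the context predicts the region.
    # The mask generator in the paper seeks to maximize this quantity.
    pred = inpaint_from_context(flow, mask)
    true = flow[mask]
    return float(np.mean((pred - true) ** 2))

# Toy flow magnitude: static background (0) with a moving object (1).
flow = np.zeros((8, 8))
flow[2:5, 2:5] = 1.0

object_mask = flow > 0.5            # mask covering the moving object
background_mask = np.zeros_like(object_mask)
background_mask[6:8, 6:8] = True    # mask on static background

# The object's flow is far less predictable from its context, so a mask
# on the object wins the adversarial game over a background mask.
print(separation_score(flow, object_mask)
      > separation_score(flow, background_mask))  # True
```

In the paper this competition is carried out between two networks over real optical flow, with no explicit regularization; the sketch above only conveys why the objective separates moving objects from context.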