Treating Motion as Option to Reduce Motion Dependency in Unsupervised Video Object Segmentation

Authors
Cho, Suhwan; Lee, Minhyeok; Lee, Seunghoon; Park, Chaewon; Kim, Donghyeong; Lee, Sangyoun
Issue Date
2023-01
Publisher
IEEE Computer Society
Citation
23rd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 5129-5138
Abstract
Unsupervised video object segmentation (VOS) aims to detect the most salient object in a video sequence at the pixel level. In unsupervised VOS, most state-of-the-art methods leverage motion cues obtained from optical flow maps in addition to appearance cues to exploit the property that salient objects usually have distinctive movements compared to the background. However, as they are overly dependent on motion cues, which may be unreliable in some cases, they cannot achieve stable prediction. To reduce this motion dependency of existing two-stream VOS methods, we propose a novel motion-as-option network that optionally utilizes motion cues. Additionally, to fully exploit the property of the proposed network that motion is not always required, we introduce a collaborative network learning strategy. On all the public benchmark datasets, our proposed network affords state-of-the-art performance with real-time inference speed. Code and models are available at https://github.com/suhwan-cho/TMO.
ISSN
2472-6737
URI
https://pubs.kist.re.kr/handle/201004/76506
DOI
10.1109/WACV56688.2023.00511
Appears in Collections:
KIST Conference Paper > 2023