Feature Disentanglement Learning with Switching and Aggregation for Video-based Person Re-Identification

Authors
Kim, Minjung; Cho, MyeongAh; Lee, Sangyoun
Issue Date
2023-01
Publisher
IEEE COMPUTER SOC
Citation
23rd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 1603-1612
Abstract
In video person re-identification (Re-ID), the network must consistently extract features of the target person across successive frames. Existing methods tend to focus only on how to exploit temporal information, which often leaves networks fooled by similar appearances and identical backgrounds. In this paper, we propose a Disentanglement and Switching and Aggregation Network (DSANet), which separates identity-related features from camera-characteristic features and directs more attention to the ID information. We also introduce an auxiliary task that uses a new pair of features created through switching and aggregation to increase the network's robustness to various camera scenarios. Furthermore, we devise a Target Localization Module (TLM), which extracts features that are robust to changes in the target's position across the frame sequence, and a Frame Weight Generation (FWG) module, which reflects temporal information in the final representation. Various loss functions for disentanglement learning are designed so that each component of the network can cooperate while satisfactorily performing its own role. Quantitative and qualitative results from extensive experiments demonstrate the superiority of DSANet over state-of-the-art methods on three benchmark datasets.
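The switching-and-aggregation idea summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fixed split point `id_dim`, the function names, and the concatenation-based aggregation are all assumptions, since DSANet learns the identity/camera disentanglement with dedicated loss functions rather than a hard slice.

```python
import numpy as np

def disentangle(feat, id_dim):
    # Hypothetical split of a feature vector into an identity part and a
    # camera-characteristic part (the paper learns this separation).
    return feat[:id_dim], feat[id_dim:]

def switch_and_aggregate(feat_a, feat_b, id_dim):
    """Create a new pair of features by swapping the camera-specific parts
    of two disentangled features -- a sketch of the switching-and-aggregation
    auxiliary task, simulating the same identity under a different camera."""
    id_a, cam_a = disentangle(feat_a, id_dim)
    id_b, cam_b = disentangle(feat_b, id_dim)
    # Aggregate each identity part with the *other* sample's camera part.
    new_a = np.concatenate([id_a, cam_b])
    new_b = np.concatenate([id_b, cam_a])
    return new_a, new_b
```

Training on such synthesized pairs encourages the identity branch to carry camera-invariant information, since the ID part must still match its label after the camera part has been swapped.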
ISSN
2472-6737
URI
https://pubs.kist.re.kr/handle/201004/76510
DOI
10.1109/WACV56688.2023.00165
Appears in Collections:
KIST Conference Paper > 2023
Files in This Item:
There are no files associated with this item.
