Dual-path Adaptation from Image to Video Transformers

Authors
Park, Jungin; Lee, Jiyoung; Sohn, Kwanghoon
Issue Date
2023-06
Publisher
IEEE Computer Society
Citation
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2203-2213
Abstract
In this paper, we efficiently transfer the superior representational power of vision foundation models, such as ViT and Swin, to video understanding with only a few trainable parameters. Previous adaptation methods have jointly considered spatial and temporal modeling with a unified learnable module but still fell short of fully leveraging the representative capabilities of image transformers. We argue that the popular dual-path (two-stream) architecture in video models can mitigate this problem. We propose a novel DUALPATH adaptation separated into spatial and temporal adaptation paths, where a lightweight bottleneck adapter is employed in each transformer block. For temporal dynamic modeling in particular, we incorporate consecutive frames into a grid-like frameset to precisely imitate the capability of vision transformers to extrapolate relationships between tokens. In addition, we extensively investigate multiple baselines from a unified perspective in video understanding and compare them with DUALPATH. Experimental results on four action recognition benchmarks demonstrate that pretrained image transformers with DUALPATH can be effectively generalized beyond the data domain.
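A minimal sketch of the two ingredients named in the abstract, written in plain PyTorch: a bottleneck adapter (down-projection, nonlinearity, up-projection with a residual) and the rearrangement of consecutive frames into a grid-like frameset. This is not the authors' released implementation; the module names, reduction ratio, zero-initialization, and frame count are illustrative assumptions.

import math
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Lightweight bottleneck adapter: down-project, nonlinearity, up-project,
    plus a residual connection. Hidden width and init scheme are assumptions."""
    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, dim // reduction)
        self.act = nn.GELU()
        self.up = nn.Linear(dim // reduction, dim)
        # Zero-init the up-projection so the adapter starts as an identity
        # mapping and the frozen pretrained block's behavior is preserved.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

def to_grid_frameset(frames: torch.Tensor) -> torch.Tensor:
    """Tile T consecutive frames into one grid-like 'image' so a frozen image
    transformer can relate patches across time. Assumes T is a perfect square."""
    b, t, c, h, w = frames.shape
    g = int(math.isqrt(t))
    assert g * g == t, "number of frames must form a square grid"
    # (B, T, C, H, W) -> (B, C, g*H, g*W)
    grid = frames.view(b, g, g, c, h, w).permute(0, 3, 1, 4, 2, 5)
    return grid.reshape(b, c, g * h, g * w)

# Example: 9 frames of 224x224 video become one 672x672 grid image,
# and the adapter keeps token dimensions unchanged.
clip = torch.randn(2, 9, 3, 224, 224)
print(to_grid_frameset(clip).shape)  # torch.Size([2, 3, 672, 672])
adapter = BottleneckAdapter(dim=768)
tokens = torch.randn(2, 196, 768)
print(adapter(tokens).shape)         # torch.Size([2, 196, 768])

The grid rearrangement lets the patch-token attention of an unmodified image transformer span multiple frames at once, which is one plausible way to read the "grid-like frameset" described above.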
ISSN
1063-6919
URI
https://pubs.kist.re.kr/handle/201004/76431
DOI
10.1109/CVPR52729.2023.00219
Appears in Collections:
KIST Conference Paper > 2023