Maximizing the Position Embedding for Vision Transformers with Global Average Pooling

Authors
Lee, Wonjun; Ham, Bumsub; Kim, Suhyun
Issue Date
2025-02
Publisher
ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE
Citation
39th AAAI Conference on Artificial Intelligence, pp.18154 - 18162
Abstract
In vision transformers, the position embedding (PE) plays a crucial role in capturing the order of tokens. However, the standard structure, in which the PE is simply added to the token embedding at the input, limits the expressiveness of the PE. A layer-wise method that delivers the PE to each layer and applies independent Layer Normalizations to the token embedding and the PE has been adopted to overcome this limitation. In this paper, we identify a conflict that arises in the layer-wise structure when the global average pooling (GAP) method is used instead of the class token. To overcome this problem, we propose MPVG, which maximizes the effectiveness of the PE in a layer-wise structure with GAP. Specifically, we identify that the PE counterbalances the token embedding values at each layer in the layer-wise structure, and we recognize that this counterbalancing role is insufficient there; MPVG addresses this by maximizing the effectiveness of the PE. Through experiments, we demonstrate that the PE performs a counterbalancing role and that maintaining this counterbalancing directionality significantly impacts vision transformers. As a result, the experimental results show that MPVG outperforms existing methods across vision transformers on various tasks.
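The layer-wise structure the abstract contrasts with the standard one can be sketched as follows. This is a minimal, illustrative NumPy mock-up of the general idea only — the PE is re-injected at every layer through its own Layer Normalization, and the output is pooled with GAP rather than a class token. It is not the authors' MPVG implementation; the block function, its `mix` stand-in for attention/MLP, and all shapes are assumptions for illustration.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize over the channel dimension (learned scale/shift omitted for brevity).
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def layer_wise_pe_block(tokens, pe, mix):
    # One simplified block in a layer-wise PE structure: the PE is delivered
    # to this layer and passed through its own LayerNorm, independent of the
    # LayerNorm applied to the token embedding (instead of being added once
    # at the input, as in the standard vision transformer).
    h = layer_norm(tokens) + layer_norm(pe)
    return tokens + h @ mix  # residual + a linear stand-in for attention/MLP

rng = np.random.default_rng(0)
num_tokens, dim, depth = 4, 8, 3
tokens = rng.normal(size=(num_tokens, dim))
pe = rng.normal(size=(num_tokens, dim))          # one PE, reused at every layer
mix = rng.normal(size=(dim, dim)) / np.sqrt(dim)

for _ in range(depth):                           # PE reaches each layer
    tokens = layer_wise_pe_block(tokens, pe, mix)

gap = tokens.mean(axis=0)                        # GAP over tokens (no class token)
print(gap.shape)                                 # (8,)
```

With a class token, only that token's final state is read out; with GAP, every token's final state contributes equally to the pooled representation, which is why the PE's per-layer influence on all tokens matters in this setting.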
ISSN
2159-5399
URI
https://pubs.kist.re.kr/handle/201004/153098
DOI
10.1609/aaai.v39i17.33997
Appears in Collections:
KIST Conference Paper > Others
Files in This Item:
There are no files associated with this item.