Mixup Mask Adaptation: Bridging the gap between input saliency and representations via attention mechanism in feature mixup

Authors
Kang, Minsoo; Kang, Minkoo; Lee, Seong-Whan; Kim, Suhyun
Issue Date
2024-06
Publisher
Elsevier BV
Citation
Image and Vision Computing, v.146
Abstract
The inherent complexity and extensive architecture of deep neural networks often lead to overfitting, compromising their ability to generalize to new, unseen data. Data augmentation, a widely used regularization technique, is now considered vital for alleviating this, and mixup, which blends pairs of images and labels, has proven effective in enhancing model generalization. Recently, incorporating saliency into mixup has yielded performance gains by retaining salient regions in the mixed results. While these methods have become mainstream at the input level, their application at the feature level remains under-explored. Our observations indicate that naive applications of input saliency-based methods did not consistently improve performance. In this paper, we attribute these observations primarily to two challenges: the 'Hard Boundary Issue' and 'Saliency Mismatch.' The Hard Boundary Issue describes a situation where masks with distinct, sharp edges work well at the input level but cause unintended distortions in deeper layers. Saliency Mismatch refers to the disparity between saliency masks generated from input images and the saliency of feature maps. To tackle these challenges, we present a novel method called 'attention-based mixup mask adaptation' (MMA). This approach employs an attention mechanism to adapt mixup masks, which are designed to maximize saliency at the input level, for feature augmentation. We reduce the Saliency Mismatch problem by incorporating the spatial significance of the feature map into the mixup mask. Additionally, we address the Hard Boundary Issue by applying softmax to smooth the adjusted mixup mask.
Through comprehensive experiments, we validate our observations and confirm the effectiveness of applying MMA to saliency-aware mixup approaches at the feature level, as evidenced by the performance improvements on multiple benchmarks and the robustness improvements against corruption and deformation.
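The two fixes the abstract describes (reweighting the input-level mask by feature-map saliency, then softening its hard edges with a softmax) can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea, not the paper's implementation: the function names, the choice of element-wise weighting, the temperature value, and the peak rescaling are all assumptions introduced here for clarity.

```python
import math

def softmax(xs, temperature=1.0):
    # Numerically stable softmax over a flat list of scores.
    m = max(xs)
    exps = [math.exp((x - m) / temperature) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def adapt_mask(input_mask, feature_saliency, temperature=0.5):
    """Adapt a hard input-level mixup mask for use at the feature level.

    input_mask: hard 0/1 mixing mask per spatial location (flattened).
    feature_saliency: per-location saliency of the feature map in [0, 1].
    (Names and weighting scheme are illustrative assumptions.)
    """
    # Saliency Mismatch fix: weight the hard mask by feature-map saliency.
    combined = [m * s for m, s in zip(input_mask, feature_saliency)]
    # Hard Boundary fix: softmax smoothing removes sharp 0/1 edges.
    weights = softmax(combined, temperature)
    # Rescale so the largest weight is 1, keeping values usable as ratios.
    peak = max(weights)
    return [w / peak for w in weights]

def mix_features(feat_a, feat_b, mask):
    # Convex combination of two feature maps per spatial location.
    return [m * a + (1 - m) * b for m, a, b in zip(mask, feat_a, feat_b)]
```

For example, a hard mask `[1, 1, 0, 0]` combined with saliency `[0.9, 0.2, 0.8, 0.1]` yields a soft mask whose values decay smoothly away from the salient masked region instead of dropping abruptly to zero.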
Keywords
Networks; Regularization; Data augmentation; Mixup
ISSN
0262-8856
URI
https://pubs.kist.re.kr/handle/201004/150019
DOI
10.1016/j.imavis.2024.105013
Appears in Collections:
KIST Article > 2024