Full metadata record

DC Field / Value
dc.contributor.author: Kim, Hanjae
dc.contributor.author: Lee, Jiyoung
dc.contributor.author: Park, Seongheon
dc.contributor.author: Sohn, Kwanghoon
dc.date.accessioned: 2024-04-18T05:30:46Z
dc.date.available: 2024-04-18T05:30:46Z
dc.date.created: 2024-04-18
dc.date.issued: 2023-10
dc.identifier.issn: 1550-5499
dc.identifier.uri: https://pubs.kist.re.kr/handle/201004/149671
dc.description.abstract: Compositional zero-shot learning (CZSL) aims to recognize unseen compositions given prior knowledge of known primitives (attributes and objects). Previous CZSL works often struggle to capture the contextuality between attribute and object, to learn discriminative visual features, and to handle the long-tailed distribution of real-world compositional data. We propose a simple and scalable framework, Composition Transformer (CoT), to address these issues. CoT employs object and attribute experts in distinctive manners, using the visual network hierarchically to generate representative embeddings. The object expert extracts representative object embeddings from the final layer in a bottom-up manner, while the attribute expert builds attribute embeddings in a top-down manner with a proposed object-guided attention module that models contextuality explicitly. To remedy the biased prediction caused by the imbalanced data distribution, we develop a simple minority attribute augmentation (MAA) that synthesizes virtual samples by mixing two images and oversampling minority attribute classes. Our method achieves state-of-the-art performance on several benchmarks, including MIT-States, C-GQA, and VAW-CZSL. We also demonstrate the effectiveness of CoT in improving visual discrimination and addressing model bias from the imbalanced data distribution. The code is available at https://github.com/HanjaeKim98/CoT.
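The minority attribute augmentation described in the abstract (mixing two images and keeping the minority attribute label) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: the function and parameter names are hypothetical, and the Beta-distributed mixing weight is borrowed from standard mixup.

```python
import numpy as np

def minority_attribute_augmentation(x_other, x_minor, y_attr_minor,
                                    alpha=2.0, rng=None):
    """Hypothetical MAA-style sketch: blend a minority-attribute image with
    another image and assign the minority attribute label to the result.

    Oversampling minority attribute classes would amount to invoking this
    more frequently for rare attributes during training.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                       # mixing weight in (0, 1)
    x_virtual = lam * x_minor + (1.0 - lam) * x_other  # pixel-wise blend
    return x_virtual, y_attr_minor                     # virtual sample, minority label
```

Because the blended image retains most of the minority-attribute appearance when `lam` is large, the virtual samples enrich rare attribute classes without collecting new data.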
dc.language: English
dc.publisher: IEEE COMPUTER SOC
dc.title: Hierarchical Visual Primitive Experts for Compositional Zero-Shot Learning
dc.type: Conference
dc.identifier.doi: 10.1109/ICCV51070.2023.00522
dc.description.journalClass: 1
dc.identifier.bibliographicCitation: IEEE/CVF International Conference on Computer Vision (ICCV), pp. 5652-5662
dc.citation.title: IEEE/CVF International Conference on Computer Vision (ICCV)
dc.citation.startPage: 5652
dc.citation.endPage: 5662
dc.citation.conferencePlace: US
dc.citation.conferencePlace: Paris, FRANCE
dc.citation.conferenceDate: 2023-10-02
dc.relation.isPartOf: 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV
dc.identifier.wosid: 001159644305085
dc.identifier.scopusid: 2-s2.0-85179035883
Appears in Collections:
KIST Conference Paper > 2023
Files in This Item:
There are no files associated with this item.