Variational Multi-Prototype Encoder for Object Recognition Using Multiple Prototype Images
- Authors
- Kang, Junseok; Ahn, Sang Chul
- Issue Date
- 2022-02
- Publisher
- Institute of Electrical and Electronics Engineers Inc.
- Citation
- IEEE Access, v.10, pp.19586 - 19598
- Abstract
- Recent research on the Variational Prototyping-Encoder (VPE) addressed the problem of classifying 2D flat objects from unseen classes. VPE solves this problem by pre-learning, as a meta-task, the image translation task from real-world object images to their corresponding prototype images. VPE uses a single prototype for each object class. In general, however, a single prototype is not sufficient to represent a generic object class, because its appearance can change significantly with viewpoint and other factors. In this case, training VPE with a single prototype per class can result in overfitting or performance degradation. One solution is to use multiple prototypes, but this requires costly sub-labeling to divide each input class into smaller classes and assign a prototype to each. We therefore propose a new learning method, the variational multi-prototype encoder (VaMPE), which overcomes this limitation of VPE and uses multiple prototypes for each object class. The proposed method requires no sub-labeling beyond simply adding multiple prototypes to each class. Through various experiments, we demonstrate that the proposed method outperforms VPE.
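- The following is a minimal, hypothetical PyTorch sketch of the multi-prototype idea described in the abstract: each class carries several prototype images, and the training loss for an input is computed against the best-matching prototype of that class, so no sub-class labels are needed. The toy encoder-decoder, the tensor shapes, and the min-over-prototypes loss are illustrative assumptions, not the paper's verified implementation; the variational (KL) term of the actual encoder is omitted for brevity.

```python
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    """Toy encoder-decoder that maps a real image toward a prototype-like image.
    Stand-in architecture, not the network used in the paper."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def multi_prototype_loss(pred, prototypes):
    """Reconstruction loss against the nearest prototype of each input's class.

    pred:       (B, 3, H, W) translated images
    prototypes: (B, K, 3, H, W) the K candidate prototypes of each input's class
    Taking the minimum over K assigns each sample to its best-matching
    prototype on the fly, so no sub-class labels are required (an assumed
    reading of "no additional sub-labeling").
    """
    # Per-prototype mean squared error, shape (B, K)
    err = ((pred.unsqueeze(1) - prototypes) ** 2).mean(dim=(2, 3, 4))
    return err.min(dim=1).values.mean()

# Usage with random stand-in data: batch of 8 images, 4 prototypes per class.
model = TinyTranslator()
x = torch.rand(8, 3, 32, 32)
protos = torch.rand(8, 4, 3, 32, 32)
loss = multi_prototype_loss(model(x), protos)
loss.backward()
```

- One design note on this sketch: the min over the K candidates routes the gradient only through the closest prototype for each sample, which is one plausible way to use multiple prototypes per class without dividing the class into labeled sub-classes.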
- Keywords
- Prototypes; Task analysis; Training; Feature extraction; Deep learning; Perturbation methods; Neural networks; variational encoder; prototype learning; embedding space; image classification
- ISSN
- 2169-3536
- URI
- https://pubs.kist.re.kr/handle/201004/115654
- DOI
- 10.1109/ACCESS.2022.3151856
- Appears in Collections:
- KIST Article > 2022