Face-Periocular Cross-Identification via Contrastive Hybrid Attention Vision Transformer
- Authors
- Tiong, Leslie Ching Ow; Sigmund, Dick; Teoh, Andrew Beng Jin
- Issue Date
- 2023-03
- Publisher
- Institute of Electrical and Electronics Engineers
- Citation
- IEEE Signal Processing Letters, v.30, pp.254 - 258
- Abstract
- Traditional biometric identification matches a probe against a gallery drawn from the same single or multiple biometric modalities. This letter presents a cross-matching scenario in which the probe and gallery come from two distinct biometric traits, namely face and periocular, coined face-periocular cross-identification (FPCI). We propose a novel contrastive loss tailored to face-periocular cross-matching that learns a joint embedding, which can serve as either gallery or probe regardless of the biometric modality. In addition, a hybrid attention vision transformer is devised: its hybrid attention module performs depth-wise convolution and conv-based multi-head self-attention in parallel to aggregate the global and local features of the face and periocular biometrics. Extensive experiments on three benchmark datasets demonstrate that our model significantly improves FPCI performance. Furthermore, a new in-the-wild face-periocular dataset, the Cross-modal Face-periocular dataset, is developed for training FPCI models.
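- The abstract's two main ingredients can be illustrated with a minimal PyTorch sketch: a hybrid attention block that runs depth-wise convolution and conv-based multi-head self-attention in parallel over a (B, C, H, W) token map, and an InfoNCE-style symmetric contrastive loss used here as an assumed stand-in for the letter's tailored face-periocular objective. The class `HybridAttentionBlock`, the function `cross_modal_contrastive_loss`, and all dimensions and temperature values are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of the ideas described in the abstract; names, shapes, and the
# InfoNCE-style loss are assumptions, not the paper's published code.
import torch
import torch.nn as nn


class HybridAttentionBlock(nn.Module):
    """Aggregates local (depth-wise conv) and global (conv-based MHSA) features in parallel."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Local branch: depth-wise convolution over the spatial token grid.
        self.dw_conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        # Global branch: queries/keys/values produced by 1x1 convolutions
        # ("conv-based" multi-head self-attention).
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)
        self.num_heads = num_heads

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from the patch-embedding stage.
        b, c, h, w = x.shape
        local = self.dw_conv(x)

        q, k, v = self.qkv(x).chunk(3, dim=1)

        def to_heads(t: torch.Tensor) -> torch.Tensor:
            # (B, C, H, W) -> (B, heads, H*W, C // heads)
            return t.reshape(b, self.num_heads, c // self.num_heads, h * w).transpose(-2, -1)

        q, k, v = to_heads(q), to_heads(k), to_heads(v)
        attn = (q @ k.transpose(-2, -1)) * (c // self.num_heads) ** -0.5
        glob = (attn.softmax(dim=-1) @ v).transpose(-2, -1).reshape(b, c, h, w)

        # Parallel aggregation of local and global features, with a residual path.
        return x + self.proj(local + glob)


def cross_modal_contrastive_loss(face_emb: torch.Tensor,
                                 peri_emb: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE-style loss pulling matched face/periocular embeddings
    together; an assumed placeholder for the paper's tailored contrastive loss."""
    f = nn.functional.normalize(face_emb, dim=-1)
    p = nn.functional.normalize(peri_emb, dim=-1)
    logits = f @ p.t() / temperature                      # (B, B) similarity matrix
    targets = torch.arange(f.size(0), device=f.device)    # matched pairs on the diagonal
    return 0.5 * (nn.functional.cross_entropy(logits, targets)
                  + nn.functional.cross_entropy(logits.t(), targets))
```

- Under this reading, either modality's embedding can be indexed as the gallery or presented as the probe at test time, which is the cross-identification setting the letter describes.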
- Keywords
- Face recognition; Probes; Faces; Transformers; Training; Feature extraction; Biological system modeling; Biometrics cross-identification; face-periocular contrastive learning; conv-based attention mechanism; vision transformer
- ISSN
- 1070-9908
- URI
- https://pubs.kist.re.kr/handle/201004/113917
- DOI
- 10.1109/LSP.2023.3256320
- Appears in Collections:
- KIST Article > 2023