Full metadata record
| DC Field | Value | Language |
|---|---|---|
dc.contributor.author | Tiong, Leslie Ching Ow | - |
dc.contributor.author | Sigmund, Dick | - |
dc.contributor.author | Teoh, Andrew Beng Jin | - |
dc.date.accessioned | 2024-01-19T10:01:21Z | - |
dc.date.available | 2024-01-19T10:01:21Z | - |
dc.date.created | 2023-04-20 | - |
dc.date.issued | 2023-03 | - |
dc.identifier.issn | 1070-9908 | - |
dc.identifier.uri | https://pubs.kist.re.kr/handle/201004/113917 | - |
dc.description.abstract | Traditional biometric identification matches a probe against a gallery, where both may involve the same single or multiple biometric modalities. This letter presents a cross-matching scenario in which the probe and gallery come from two distinct biometric modalities, namely face and periocular, coined face-periocular cross-identification (FPCI). We propose a novel contrastive loss tailored to face-periocular cross-matching that learns a joint embedding, which can serve as either gallery or probe regardless of the biometric modality. In addition, a hybrid attention vision transformer is devised. The hybrid attention module performs depth-wise convolution and convolution-based multi-head self-attention in parallel to aggregate global and local features of the face and periocular biometrics. Extensive experiments on three benchmark datasets demonstrate that our model substantially improves FPCI performance. Furthermore, a new in-the-wild face-periocular dataset, the Cross-modal Face-periocular dataset, is developed for training FPCI models. | - |
dc.language | English | - |
dc.publisher | Institute of Electrical and Electronics Engineers | - |
dc.title | Face-Periocular Cross-Identification via Contrastive Hybrid Attention Vision Transformer | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/LSP.2023.3256320 | - |
dc.description.journalClass | 1 | - |
dc.identifier.bibliographicCitation | IEEE Signal Processing Letters, v.30, pp.254 - 258 | - |
dc.citation.title | IEEE Signal Processing Letters | - |
dc.citation.volume | 30 | - |
dc.citation.startPage | 254 | - |
dc.citation.endPage | 258 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.identifier.wosid | 000958573100003 | - |
dc.identifier.scopusid | 2-s2.0-85151401867 | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalResearchArea | Engineering | - |
dc.type.docType | Article | - |
dc.subject.keywordPlus | RECOGNITION | - |
dc.subject.keywordAuthor | Face recognition | - |
dc.subject.keywordAuthor | Probes | - |
dc.subject.keywordAuthor | Faces | - |
dc.subject.keywordAuthor | Transformers | - |
dc.subject.keywordAuthor | Training | - |
dc.subject.keywordAuthor | Feature extraction | - |
dc.subject.keywordAuthor | Biological system modeling | - |
dc.subject.keywordAuthor | Biometrics cross-identification | - |
dc.subject.keywordAuthor | face-periocular contrastive learning | - |
dc.subject.keywordAuthor | conv-based attention mechanism | - |
dc.subject.keywordAuthor | vision transformer | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.