Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ž. Emeršič | - |
dc.contributor.author | A. Kumar S. V | - |
dc.contributor.author | B. S. Harish | - |
dc.contributor.author | W. Gutfeter | - |
dc.contributor.author | J. N. Khiarak | - |
dc.contributor.author | A. Pacut | - |
dc.contributor.author | E. Hansley | - |
dc.contributor.author | M. Pamplona Segundo | - |
dc.contributor.author | S. Sarkar | - |
dc.contributor.author | H. J. Park | - |
dc.contributor.author | Nam, Gi Pyo | - |
dc.contributor.author | Kim, Ig-Jae | - |
dc.contributor.author | S. G. Sangodkar | - |
dc.contributor.author | U. Kacar | - |
dc.contributor.author | M. Kirci | - |
dc.contributor.author | L. Yuan | - |
dc.contributor.author | J. Yuan | - |
dc.contributor.author | H. Zhao | - |
dc.contributor.author | F. Lu | - |
dc.contributor.author | J. Mao | - |
dc.contributor.author | X. Zhang | - |
dc.contributor.author | D. Yaman | - |
dc.contributor.author | F. I. Eyioku | - |
dc.contributor.author | K. B. Özler | - |
dc.contributor.author | H. K. Ekenel | - |
dc.contributor.author | D. Paul Chowdhury | - |
dc.contributor.author | S. Bakshi | - |
dc.contributor.author | P. K. Sa | - |
dc.contributor.author | B. Majhi | - |
dc.contributor.author | P. Peer | - |
dc.contributor.author | V. Štruc | - |
dc.date.accessioned | 2024-01-12T04:43:18Z | - |
dc.date.available | 2024-01-12T04:43:18Z | - |
dc.date.created | 2023-10-25 | - |
dc.date.issued | 2019-06-05 | - |
dc.identifier.uri | https://pubs.kist.re.kr/handle/201004/78532 | - |
dc.description.abstract | This paper presents a summary of the 2019 Unconstrained Ear Recognition Challenge (UERC), the second in a series of group benchmarking efforts centered around the problem of person recognition from ear images captured in uncontrolled settings. The goal of the challenge is to assess the performance of existing ear recognition techniques on a challenging large-scale ear dataset and to analyze the performance of the technology from various viewpoints, such as generalization abilities to unseen data characteristics; sensitivity to rotations, occlusions, and image resolution; and performance bias on sub-groups of subjects selected based on demographic criteria, i.e., gender and ethnicity. Research groups from 12 institutions entered the competition and submitted a total of 13 recognition approaches ranging from descriptor-based methods to deep-learning models. The majority of submissions focused on ensemble-based methods combining either representations from multiple deep models or hand-crafted descriptors with learned image descriptors. Our analysis shows that methods incorporating deep learning models clearly outperform techniques relying solely on hand-crafted descriptors, even though both groups of techniques exhibit similar behavior when it comes to robustness to various covariates, such as the presence of occlusions, changes in (head) pose, or variability in image resolution. The results of the challenge also show that there has been considerable progress since the first UERC in 2017, but that there is still ample room for further research in this area. | - |
dc.language | English | - |
dc.publisher | International Association for Pattern Recognition (IAPR) | - |
dc.title | The Unconstrained Ear Recognition Challenge 2019 | - |
dc.type | Conference | - |
dc.identifier.doi | 10.1109/ICB45273.2019.8987337 | - |
dc.description.journalClass | 1 | - |
dc.identifier.bibliographicCitation | IAPR International Conference on Biometrics 2019 (ICB) | - |
dc.citation.title | IAPR International Conference on Biometrics 2019 (ICB) | - |
dc.citation.conferencePlace | US | - |
dc.citation.conferencePlace | Greece | - |
dc.citation.conferenceDate | 2019-06-04 | - |
dc.relation.isPartOf | 2019 International Conference on Biometrics (ICB) | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.