Emeršič, Žiga; V., A. Kumar S.; Harish, B. S.; Gutfeter, W.; Khiarak, J. N.; Pacut, A.; Hansley, E.; Segundo, M. Pamplona; Sarkar, S.; Park, H.; Nam, G. Pyo; Kim, I. J.; Sangodkar, S. G.; Kacar, U.; Kirci, M.; Yuan, L.; Yuan, J.; Zhao, H.; Lu, F.; Mao, J.; Zhang, X.; Yaman, D.; Eyiokur, F. I.; Ozler, K. B.; Ekenel, H. K.; Chowdhury, D. Paul; Bakshi, S.; Sa, P. K.; Majhi, B.; Peer, P.; Štruc, V.
The Unconstrained Ear Recognition Challenge 2019
In: International Conference on Biometrics (ICB 2019), 2019.
This paper presents a summary of the 2019 Unconstrained Ear Recognition Challenge (UERC), the second in a series of group benchmarking efforts centered around the problem of person recognition from ear images captured in uncontrolled settings. The goal of the challenge is to assess the performance of existing ear recognition techniques on a challenging large-scale ear dataset and to analyze the performance of the technology from various viewpoints, such as generalization ability to unseen data characteristics; sensitivity to rotations, occlusions, and image resolution; and performance bias on sub-groups of subjects selected based on demographic criteria, i.e., gender and ethnicity. Research groups from 12 institutions entered the competition and submitted a total of 13 recognition approaches, ranging from descriptor-based methods to deep-learning models. The majority of submissions focused on ensemble-based methods combining either representations from multiple deep models or hand-crafted descriptors with learned ones. Our analysis shows that methods incorporating deep learning models clearly outperform techniques relying solely on hand-crafted descriptors, even though both groups of techniques exhibit similar behavior with respect to various covariates, such as the presence of occlusions, changes in (head) pose, or variability in image resolution. The results of the challenge also show that there has been considerable progress since the first UERC in 2017, but that there is still ample room for further research in this area.