2026
Lajić, Romanela; Peer, Peter; Štruc, Vitomir; Han, Dong Seog; Meden, Blaž; Emeršič, Žiga: FACES: Facial Analysis with Compressed Efficient Systems. Journal Article. In: ICT Express, 2026.
Tags: biometrics, distillation, face recognition, knowledge distillation
Abstract: Due to their promising performance, vision transformers are increasingly being incorporated into various biometric solutions, mainly in the domain of face analysis. However, their size and computational expense remain the biggest obstacles to their full utilization, and there is high demand for optimization of these models. In this paper, we propose a novel pruning technique for face-analysis vision transformers aimed at reducing their memory and computational cost. The method uses existing transformer parameters as importance scores, which allows for a simple one-shot pruning and retraining approach. By testing the method on the SWINFace transformer for both verification and attribute-recognition tasks, we show that models compressed to sparsity levels of up to 50% maintain the performance of, or even outperform, the original model, while also outperforming state-of-the-art vision-transformer pruning methods and showing versatility across different face-analysis tasks.
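The core idea in the abstract, one-shot pruning with existing parameter values serving as importance scores, can be sketched as follows. This is a generic magnitude-pruning illustration under assumptions, not the authors' implementation; only the use of parameters as importance scores, the one-shot setting, and the 50% sparsity level come from the abstract.

```python
import numpy as np

def one_shot_prune(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of entries with the smallest
    absolute value, treating the parameters themselves as importance
    scores (hypothetical sketch; real pruning would act per layer and
    be followed by retraining)."""
    scores = np.abs(weights)
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # Threshold at the k-th smallest importance score; everything at
    # or below it is removed.
    threshold = np.partition(scores.ravel(), k - 1)[k - 1]
    mask = scores > threshold
    return weights * mask, mask

# Example: prune a random weight matrix to 50% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned, mask = one_shot_prune(w, sparsity=0.5)
print(mask.mean())  # fraction of weights kept, close to 0.5
```

In practice the surviving weights would then be retrained briefly to recover any accuracy lost in the single pruning step.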
2024
Boutros, Fadi; Štruc, Vitomir; Damer, Naser: AdaDistill: Adaptive Knowledge Distillation for Deep Face Recognition. Proceedings Article. In: Proceedings of the European Conference on Computer Vision (ECCV 2024), pp. 1-20, 2024.
Tags: adaptive distillation, biometrics, CNN, deep learning, face, face recognition, knowledge distillation
Abstract: Knowledge distillation (KD) aims to improve the performance of a compact student model by distilling knowledge from a high-performing teacher model. In this paper, we present an adaptive KD approach, namely AdaDistill, for deep face recognition. The proposed AdaDistill embeds the KD concept into the softmax loss by training the student with a margin-penalty softmax loss using class centers distilled from the teacher. Being aware of the relatively low capacity of the compact student model, we propose to distill less complex knowledge at an early stage of training and more complex knowledge at a later stage. This relative adjustment of the distilled knowledge is controlled by the progression of the student's learning capability over the training iterations, without the need to tune any hyper-parameters. Extensive experiments and ablation studies show that AdaDistill can enhance the discriminative learning capability of the student and demonstrate superiority over various state-of-the-art competitors on several challenging benchmarks, such as IJB-B, IJB-C, and ICCV2021-MFR.
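The loss described in the abstract, a margin-penalty softmax computed against class centers taken from the teacher, can be sketched as below. This is a minimal ArcFace-style illustration under assumptions: the abstract specifies only that teacher-distilled class centers replace the usual learned classifier weights; the scale and margin values, and all names here, are hypothetical, and AdaDistill's adaptive scheduling of the distilled knowledge is not shown.

```python
import numpy as np

def margin_softmax_loss(features, class_centers, labels, scale=64.0, margin=0.5):
    """Margin-penalty softmax: cosine logits against (teacher-distilled)
    class centers, with an additive angular margin on each sample's
    target class (hypothetical sketch, not the authors' code)."""
    # L2-normalise embeddings and class centers -> cosine similarities.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    c = class_centers / np.linalg.norm(class_centers, axis=1, keepdims=True)
    cos = f @ c.T
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    # Apply the angular margin only to the target-class logit.
    rows = np.arange(len(labels))
    logits = cos.copy()
    logits[rows, labels] = np.cos(theta[rows, labels] + margin)
    logits *= scale
    # Numerically stable softmax cross-entropy.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()

# Toy usage: 8 student embeddings scored against 10 teacher class centers.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 128))
centers = rng.normal(size=(10, 128))  # would come from the teacher model
labels = rng.integers(0, 10, size=8)
loss = margin_softmax_loss(emb, centers, labels)
```

Because the classifier weights are the teacher's class centers rather than freely learned parameters, minimizing this loss pulls student embeddings toward the teacher's class structure while the margin enforces discriminative separation.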