2025

Batagelj, Borut; Kronovšek, Andrej; Štruc, Vitomir; Peer, Peter: Robust cross-dataset deepfake detection with multitask self-supervised learning. Journal Article. In: ICT Express, pp. 1-5, 2025.
Abstract: Deepfake detection is increasingly critical due to the rise of manipulated media. Existing methods often require extensive datasets and struggle with interpretability. To address these issues, this study introduces a novel one-class approach for detecting and localizing deepfake artifacts in videos, using authentic images to generate manipulated data for training. By integrating segmentation and combining convolutional neural networks with visual transformers, the method predicts both the presence and the location of the generated manipulations. Experiments on seven deepfake datasets and emerging diffusion-based manipulations show that our approach consistently outperforms existing methods, demonstrating superior accuracy and localization capabilities.
2024

Brodarič, Marko; Peer, Peter; Štruc, Vitomir: Towards Improving Backbones for Deepfake Detection. Proceedings Article. In: Proceedings of ERK 2024, pp. 1-4, 2024.
Brodarič, Marko; Peer, Peter; Štruc, Vitomir: Cross-Dataset Deepfake Detection: Evaluating the Generalization Capabilities of Modern DeepFake Detectors. Proceedings Article. In: Proceedings of the 27th Computer Vision Winter Workshop (CVWW), pp. 1-10, 2024.
Abstract: Due to the recent advances in generative deep learning, numerous techniques have been proposed in the literature that allow for the creation of so-called deepfakes, i.e., forged facial images commonly used for malicious purposes. These developments have triggered a need for effective deepfake detectors, capable of identifying forged and manipulated imagery as robustly as possible. While a considerable number of detection techniques have been proposed over the years, generalization across a wide spectrum of deepfake-generation techniques still remains an open problem. In this paper, we study a representative set of deepfake detection methods and analyze their performance in a cross-dataset setting with the goal of better understanding the reasons behind the observed generalization performance. To this end, we conduct a comprehensive analysis on the FaceForensics++ dataset and adopt Gradient-weighted Class Activation Mappings (Grad-CAM) to provide insights into the behavior of the evaluated detectors. Since a new class of deepfake-generation techniques based on diffusion models has recently appeared in the literature, we introduce a new subset of the FaceForensics++ dataset with diffusion-based deepfakes and include it in our analysis. The results of our experiments show that most detectors overfit to the specific image artifacts induced by a given deepfake-generation model and mostly focus on local image areas where such artifacts can be expected. Conversely, good generalization appears to be correlated with class activations that cover a broad spatial area and hence capture the different image artifacts that appear in various parts of the facial region.
Ivanovska, Marija; Štruc, Vitomir: On the Vulnerability of Deepfake Detectors to Attacks Generated by Denoising Diffusion Models. Proceedings Article. In: Proceedings of WACV Workshops, pp. 1051-1060, 2024.
Abstract: The detection of malicious deepfakes is a constantly evolving problem that requires continuous monitoring of detectors to ensure they can detect image manipulations generated by the latest emerging models. In this paper, we investigate the vulnerability of single-image deepfake detectors to black-box attacks created by the newest generation of generative methods, namely Denoising Diffusion Models (DDMs). Our experiments are run on FaceForensics++, a widely used deepfake benchmark consisting of manipulated images generated with various techniques for face identity swapping and face reenactment. Attacks are crafted through guided reconstruction of existing deepfakes with a proposed DDM approach for face restoration. Our findings indicate that employing just a single denoising diffusion step in the reconstruction process of a deepfake can significantly reduce the likelihood of detection, all without introducing any perceptible image modifications. While training detectors on attack examples demonstrated some effectiveness, discriminators trained on fully diffusion-based deepfakes exhibited limited generalizability when presented with our attacks.
2023

Larue, Nicolas; Vu, Ngoc-Son; Štruc, Vitomir; Peer, Peter; Christophides, Vassilis: SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for Exposing Deepfakes. Proceedings Article. In: Proceedings of the International Conference on Computer Vision (ICCV), pp. 21011-21021, IEEE, 2023.
Abstract: Modern deepfake detectors have achieved encouraging results when training and test images are drawn from the same data collection. However, when these detectors are applied to images produced with unknown deepfake-generation techniques, considerable performance degradations are commonly observed. In this paper, we propose a novel deepfake detector, called SeeABLE, that formalizes the detection problem as a (one-class) out-of-distribution detection task and generalizes better to unseen deepfakes. Specifically, SeeABLE first generates local image perturbations (referred to as soft discrepancies) and then pushes the perturbed faces towards predefined prototypes using a novel regression-based bounded contrastive loss. To strengthen the generalization performance of SeeABLE to unknown deepfake types, we generate a rich set of soft discrepancies and train the detector (i) to localize which part of the face was modified and (ii) to identify the alteration type. To demonstrate the capabilities of SeeABLE, we perform rigorous experiments on several widely used deepfake datasets and show that our model convincingly outperforms competing state-of-the-art detectors, while exhibiting highly encouraging generalization capabilities. The source code for SeeABLE is available at: https://github.com/anonymous-author-sub/seeable.
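The SeeABLE abstract describes a self-supervised recipe: synthesize local perturbations (soft discrepancies) on real faces, then supervise the detector with the perturbation's location and type. A minimal toy sketch of that pseudo-label generation step is given below; it treats a face as a flat grid of intensities and is purely illustrative (the function name, grid scheme, and perturbation types are assumptions, not the paper's actual pipeline).

```python
import random

def make_soft_discrepancy(face, grid=4, strength=0.05, rng=None):
    """Toy sketch of soft-discrepancy pseudo-label generation
    (illustrative only; the paper's real augmentation differs).

    `face` is a flat list of pixel intensities in [0, 1] forming a
    square image. One grid cell is softly perturbed; the cell index
    and the perturbation type become self-supervised labels."""
    rng = rng or random.Random()
    side = int(len(face) ** 0.5)
    cell = side // grid
    # pick which grid cell to alter (the localization pseudo-label)
    loc = rng.randrange(grid * grid)
    gy, gx = divmod(loc, grid)
    kind = rng.choice(["brighten", "darken"])  # alteration-type label
    out = list(face)
    for y in range(gy * cell, (gy + 1) * cell):
        for x in range(gx * cell, (gx + 1) * cell):
            delta = strength if kind == "brighten" else -strength
            out[y * side + x] = min(1.0, max(0.0, out[y * side + x] + delta))
    return out, loc, kind

# usage: build one pseudo-labelled training sample from a "real" image
face = [0.5] * (16 * 16)
perturbed, loc, kind = make_soft_discrepancy(face, rng=random.Random(0))
```

A real implementation would blend perturbations smoothly and operate on face landmarks rather than a fixed grid, but the triplet (perturbed image, location, type) conveys the training signal the abstract refers to.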
Peng, Bo; Sun, Xianyun; Wang, Caiyong; Wang, Wei; Dong, Jing; Sun, Zhenan; Zhang, Rongyu; Cong, Heng; Fu, Lingzhi; Wang, Hao; Zhang, Yusheng; Zhang, HanYuan; Zhang, Xin; Liu, Boyuan; Ling, Hefei; Dragar, Luka; Batagelj, Borut; Peer, Peter; Štruc, Vitomir; Zhou, Xinghui; Liu, Kunlin; Feng, Weitao; Zhang, Weiming; Wang, Haitao; Diao, Wenxiu: DFGC-VRA: DeepFake Game Competition on Visual Realism Assessment. Proceedings Article. In: IEEE International Joint Conference on Biometrics (IJCB 2023), pp. 1-9, Ljubljana, Slovenia, 2023.
Abstract: This paper presents the summary report on the DeepFake Game Competition on Visual Realism Assessment (DFGC-VRA). Deep-learning-based face-swap videos, also known as deepfakes, are becoming more and more realistic and deceiving. The malicious usage of these face-swap videos has caused wide concerns. There is an ongoing deepfake game between its creators and detectors, with the human in the loop. The research community has been focusing on the automatic detection of these fake videos, but the assessment of their visual realism, as perceived by human eyes, is still an unexplored dimension. Visual realism assessment, or VRA, is essential for assessing the potential impact that may be brought by a specific face-swap video, and it is also useful as a quality metric to compare different face-swap methods. This is the third edition of the DFGC competitions, which focuses on the new visual realism assessment topic, different from previous editions that pitted creators against detectors. With this competition, we conduct a comprehensive study of the state-of-the-art performance on the new task. We also release our MindSpore code to further facilitate research in this field (https://github.com/bomb2peng/DFGC-VRA-benckmark). (Jing Dong, jdong@nlpr.ia.ac.cn, is the corresponding author.)
2021

Ivanovska, Marija; Štruc, Vitomir: A Comparative Study on Discriminative and One-Class Learning Models for Deepfake Detection. Proceedings Article. In: Proceedings of ERK 2021, pp. 1-4, 2021.
Abstract: Deepfakes, or manipulated face images where a donor's face is swapped with the face of a target person, have recently gained enormous popularity among the general public. With the advancements in artificial intelligence and generative modeling, such images can nowadays be easily generated and used to spread misinformation and harm individuals, businesses or society. As the tools for generating deepfakes are rapidly improving, it is critical for deepfake detection models to be able to recognize advanced, sophisticated data manipulations, including those that have not been seen during training. In this paper, we explore the use of one-class learning models as an alternative to discriminative methods for the detection of deepfakes. We conduct a comparative study with three popular deepfake datasets and investigate the performance of selected (discriminative and one-class) detection models in matched- and cross-dataset experiments. Our results show that discriminative models significantly outperform one-class models when training and testing data come from the same dataset, but degrade considerably when the characteristics of the testing data deviate from the training setting. In such cases, one-class models tend to generalize much better.
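The core idea behind the one-class models studied above is that a detector fitted only on real faces can flag anything that deviates from their distribution, without ever seeing fakes during training. A minimal stand-in sketch of that idea is shown below, using a centroid-distance detector over toy 2-D features; the class name, feature layout, and quantile threshold are assumptions for illustration, not any method from the paper.

```python
import math

class CentroidOneClassDetector:
    """Toy one-class baseline: fit on features of real faces only,
    then flag anything far from their centroid as a potential fake."""

    def fit(self, real_features, quantile=0.95):
        dim = len(real_features[0])
        n = len(real_features)
        self.centroid = [sum(f[i] for f in real_features) / n for i in range(dim)]
        # threshold at the chosen quantile of distances seen on real data
        dists = sorted(self._dist(f) for f in real_features)
        self.threshold = dists[min(n - 1, int(quantile * n))]
        return self

    def _dist(self, feat):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(feat, self.centroid)))

    def is_fake(self, feat):
        return self._dist(feat) > self.threshold

# usage: "real" features cluster tightly near the origin
real = [((i % 7) * 0.01, (i % 5) * 0.01) for i in range(50)]
det = CentroidOneClassDetector().fit(real)
print(det.is_fake((5.0, 5.0)))    # far from the real cluster -> True
print(det.is_fake((0.02, 0.02)))  # inside the real cluster -> False
```

Real one-class detectors operate on learned deep features rather than raw coordinates, but the training-on-reals-only property is what gives them the cross-dataset robustness the study reports.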