2025
Batagelj, Borut; Kronovšek, Andrej; Štruc, Vitomir; Peer, Peter: Robust cross-dataset deepfake detection with multitask self-supervised learning. Journal Article. In: ICT Express, pp. 1-5, 2025. Tags: deepfake, deepfake DAD, deepfake detection, multi-task learning, segmentation.
Abstract: Deepfake detection is increasingly critical due to the rise of manipulated media. Existing methods often require extensive training data and struggle with interpretability. To address these issues, this study introduces a novel one-class approach for detecting and localizing deepfake artifacts in videos, using authentic images to generate manipulated data for training. By integrating segmentation and combining convolutional neural networks with visual transformers, the method predicts both the presence and the location of the generated manipulations. Experiments on seven deepfake datasets and emerging diffusion-based manipulations show that our approach consistently outperforms existing methods, demonstrating superior accuracy and localization capabilities.
2024
Rot, Peter; Terhorst, Philipp; Peer, Peter; Štruc, Vitomir: ASPECD: Adaptable Soft-Biometric Privacy-Enhancement Using Centroid Decoding for Face Verification. Proceedings Article. In: Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (FG), pp. 1-9, 2024. Tags: deepfake, deepfakes, face, face analysis, face deidentification, face image processing, face images, face synthesis, face verification, privacy, privacy enhancement, privacy protection, privacy-enhancing techniques, soft biometric privacy, soft biometrics.
Abstract: State-of-the-art face recognition models commonly extract information-rich biometric templates from the input images that are then used for comparison and identity inference. While these templates encode identity information in a highly discriminative manner, they typically also capture other potentially sensitive facial attributes, such as age, gender or ethnicity. To address this issue, Soft-Biometric Privacy-Enhancing Techniques (SB-PETs) were proposed in the literature that aim to suppress such attribute information and, in turn, alleviate the privacy risks associated with the extracted biometric templates. While various SB-PETs have been presented so far, existing approaches do not provide dedicated mechanisms to determine which soft-biometrics to exclude and which to retain. In this paper, we address this gap and introduce ASPECD, a modular framework designed to selectively suppress binary and categorical soft-biometrics based on users' privacy preferences. ASPECD consists of multiple sequentially connected components, each dedicated to the privacy enhancement of an individual soft-biometric attribute. The proposed framework suppresses attribute information using a Moment-based Disentanglement process coupled with a centroid decoding procedure, ensuring that the privacy-enhanced templates remain directly comparable to templates in the original embedding space, regardless of the soft-biometric modality being suppressed. To validate the performance of ASPECD, we conduct experiments on a large-scale face dataset and with five state-of-the-art face recognition models, demonstrating the effectiveness of the proposed approach in suppressing single and multiple soft-biometric attributes. Our approach achieves a competitive privacy-utility trade-off compared to state-of-the-art methods in scenarios that involve enhancing privacy w.r.t. the gender and ethnicity attributes. Source code will be made publicly available.
Brodarič, Marko; Peer, Peter; Štruc, Vitomir: Cross-Dataset Deepfake Detection: Evaluating the Generalization Capabilities of Modern DeepFake Detectors. Proceedings Article. In: Proceedings of the 27th Computer Vision Winter Workshop (CVWW), pp. 1-10, 2024. Tags: data integrity, deepfake, deepfake detection, deepfakes, diffusion, face, faceforensics++, media forensics.
Abstract: Due to the recent advances in generative deep learning, numerous techniques have been proposed in the literature that allow for the creation of so-called deepfakes, i.e., forged facial images commonly used for malicious purposes. These developments have triggered a need for effective deepfake detectors, capable of identifying forged and manipulated imagery as robustly as possible. While a considerable number of detection techniques have been proposed over the years, generalization across a wide spectrum of deepfake-generation techniques still remains an open problem. In this paper, we study a representative set of deepfake detectors and analyze their performance in a cross-dataset setting with the goal of better understanding the reasons behind the observed generalization performance. To this end, we conduct a comprehensive analysis on the FaceForensics++ dataset and adopt Gradient-weighted Class Activation Mappings (Grad-CAM) to provide insights into the behavior of the evaluated detectors. Since a new class of deepfake generation techniques based on diffusion models recently appeared in the literature, we introduce a new subset of the FaceForensics++ dataset with diffusion-based deepfakes and include it in our analysis. The results of our experiments show that most detectors overfit to the specific image artifacts induced by a given deepfake-generation model and mostly focus on local image areas where such artifacts can be expected. Conversely, good generalization appears to be correlated with class activations that cover a broad spatial area and hence capture different image artifacts that appear in various parts of the facial region.
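The Grad-CAM analysis the abstract refers to weights each convolutional activation map by the spatial average of the class-score gradient, then keeps the positive part of the weighted sum. A minimal NumPy sketch of this weighting, using random stand-ins for a detector's feature maps and gradients (not any of the evaluated models):

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heatmap from one conv layer.

    activations: (C, H, W) feature maps of the chosen layer.
    gradients:   (C, H, W) gradient of the class score w.r.t. those maps.
    """
    # Channel weights: global average pooling of the gradients.
    weights = gradients.mean(axis=(1, 2))              # shape (C,)
    # Weighted combination of the activation maps.
    cam = np.tensordot(weights, activations, axes=1)   # shape (H, W)
    # Keep only features with a positive influence on the class.
    cam = np.maximum(cam, 0.0)
    # Normalise to [0, 1] for visualisation.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with random stand-ins for a detector's conv layer.
rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 7, 7))
grads = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

A broad, spread-out heatmap would correspond to the "broad spatial area" activations the paper links to good generalization, while a heatmap concentrated on a small patch indicates overfitting to local artifacts.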
Ivanovska, Marija; Štruc, Vitomir: On the Vulnerability of Deepfake Detectors to Attacks Generated by Denoising Diffusion Models. Proceedings Article. In: Proceedings of WACV Workshops, pp. 1051-1060, 2024. Tags: deep learning, deepfake, deepfake detection, diffusion models, face, media forensics.
Abstract: The detection of malicious deepfakes is a constantly evolving problem that requires continuous monitoring of detectors to ensure they can detect image manipulations generated by the latest emerging models. In this paper, we investigate the vulnerability of single-image deepfake detectors to black-box attacks created by the newest generation of generative methods, namely Denoising Diffusion Models (DDMs). Our experiments are run on FaceForensics++, a widely used deepfake benchmark consisting of manipulated images generated with various techniques for face identity swapping and face reenactment. Attacks are crafted through guided reconstruction of existing deepfakes with a proposed DDM approach for face restoration. Our findings indicate that employing just a single denoising diffusion step in the reconstruction process of a deepfake can significantly reduce the likelihood of detection, all without introducing any perceptible image modifications. While training detectors on attack examples demonstrated some effectiveness, discriminators trained on fully diffusion-based deepfakes exhibited limited generalizability when presented with our attacks.
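The attack's key ingredient, a single denoising diffusion step applied to an existing deepfake, corresponds to one iteration of the standard DDPM reverse update. A hedged NumPy sketch of that update is shown below; the zero noise prediction and the variance choice are illustrative stand-ins, as a real attack would use the paper's trained face-restoration DDM:

```python
import numpy as np

def ddpm_reverse_step(x_t, eps_pred, alpha_t, alpha_bar_t, z=None):
    """One DDPM reverse (denoising) step: x_t -> x_{t-1}.

    x_t:         noisy image at step t
    eps_pred:    noise predicted by the model for x_t
    alpha_t:     1 - beta_t for this step
    alpha_bar_t: cumulative product of alphas up to step t
    z:           optional Gaussian noise (None for a deterministic step)
    """
    # Posterior mean: remove the predicted noise, then rescale.
    coef = (1.0 - alpha_t) / np.sqrt(1.0 - alpha_bar_t)
    mean = (x_t - coef * eps_pred) / np.sqrt(alpha_t)
    if z is None:
        return mean
    # Common variance choice: sigma_t^2 = beta_t = 1 - alpha_t.
    sigma = np.sqrt(1.0 - alpha_t)
    return mean + sigma * z

# Toy example: one deterministic step on a random "image",
# with a zero noise prediction standing in for the model.
rng = np.random.default_rng(1)
x = rng.standard_normal((3, 16, 16))
x_prev = ddpm_reverse_step(x, eps_pred=np.zeros_like(x),
                           alpha_t=0.99, alpha_bar_t=0.5)
print(x_prev.shape)  # (3, 16, 16)
```

With `alpha_t` close to 1, a single such step perturbs the image only slightly, which is consistent with the finding that one denoising step evades detection without perceptible modifications.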