DeepFake detection using anomaly detection methods (DeepFake DAD)


Advances in artificial intelligence and deep learning, especially in computer vision and generative modeling, have made photo-realistic manipulation of images and video footage increasingly accessible, lowering the skill and time investment needed to produce visually convincing tampered (fake) footage or imagery. This has resulted in the rise of so-called deepfakes: fake images and videos generated at scale with machine-learning methods.

To counter the illicit activities enabled by deepfake technologies, it is paramount to have highly automated and reliable means of detecting deepfakes at one's disposal. Such detection technology not only enables efficient (large-scale) screening of image and video content but also allows non-experts to determine whether a given video or image is real or manipulated. Within the research project Deepfake detection using anomaly detection methods (DeepFake DAD), we address this issue and conduct research on fundamentally novel methods for deepfake detection that remedy the deficiencies of current solutions in this problem domain. Existing deepfake detectors rely on (semi-)handcrafted features that have been shown to work against a predefined set of publicly available/known deepfake generation methods. Detection techniques developed in this manner are, however, unable to detect deepfakes produced by unknown or unseen (future) generation methods. The goal of DeepFake DAD is, therefore, to develop detection models that can be trained in a semi-supervised or unsupervised manner, without relying on training samples from publicly available deepfake generation techniques, i.e., within so-called anomaly detection frameworks trainable in a one-class learning regime.
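To give a flavor of the one-class learning regime mentioned above, the sketch below fits a simple anomaly detector to features of real images only and flags out-of-distribution samples as potential deepfakes. The random feature vectors and the Gaussian/Mahalanobis-distance model are illustrative assumptions for the sketch, not the project's actual pipeline; in practice the features would come from a learned representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in features (assumption): in a real system these would be deep
# features extracted from face images. Training uses real samples ONLY.
real_train = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
real_test = rng.normal(loc=0.0, scale=1.0, size=(50, 8))
fake_test = rng.normal(loc=4.0, scale=1.0, size=(50, 8))  # shifted distribution

# One-class model: fit a Gaussian to the real-data features.
mu = real_train.mean(axis=0)
cov = np.cov(real_train, rowvar=False) + 1e-6 * np.eye(real_train.shape[1])
cov_inv = np.linalg.inv(cov)

def score(x):
    """Squared Mahalanobis distance to the 'real' distribution."""
    d = x - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

# Decision threshold calibrated on real training data (99th percentile),
# so roughly 1% of genuine samples are expected to be flagged.
tau = np.percentile(score(real_train), 99)

real_flagged = np.mean(score(real_test) > tau)  # false-alarm rate
fake_flagged = np.mean(score(fake_test) > tau)  # detection rate
print(f"real flagged: {real_flagged:.2f}, fake flagged: {fake_flagged:.2f}")
```

Because the detector never sees fake samples during training, it does not depend on any particular deepfake generation method, which is the key property the project pursues (with far more capable models than this toy Gaussian).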

The expected main tangible results of the research project are highly robust deepfake detection solutions that outperform the current state of the art in terms of generalization capabilities and can help end-users and platform providers automatically detect tampered imagery and video, allowing them to act accordingly and avoid the negative personal, societal, economic, and political implications of widespread, undetectable fake footage.

DeepFake DAD (ARIS: J2-50065) is a fundamental research project funded by the Slovenian Research and Innovation Agency (ARIS) for the period 1.10.2023 – 30.9.2026 (1.51 FTE per year).

The Principal Investigator (PI) of DeepFake DAD is Prof. Vitomir Štruc, PhD.

Link to SICRIS: TBA.

Project overview

DeepFake DAD is structured into 6 work packages:

  • WP1: Coordination and project management
  • WP2: Supervised models and data representations
  • WP3: From supervised to semi-supervised models
  • WP4: Unsupervised DeepFake detection
  • WP5: Demonstration and exploitation
  • WP6: Dissemination

The R&D work on these work packages is expected to result in:

  • Discriminative data representations
  • Semi-supervised deepfake detectors
  • Unsupervised and one-class detection models

Project phases

  • Year 1: Activities on work packages WP1, WP2, WP6
  • Year 2: Activities on work packages WP1, WP2, WP3, WP4, WP6
  • Year 3: Activities on work packages WP1, WP3, WP4, WP5, WP6


DeepFake DAD is conducted jointly by:

Participating researchers

International Advisory Committee

Project publications

Journal Publications

  • Marija Ivanovska, Vitomir Štruc, Y-GAN: Learning Dual Data Representations for Anomaly Detection in Images, Expert Systems with Applications (SCI IF = 8.5), 2024

Conference Publications

  • Marija Ivanovska, Vitomir Štruc, On the Vulnerability of Deepfake Detectors to Attacks Generated by Denoising Diffusion Models, Proceedings of WACV Workshops, pp. 1051-1060, 2024 [PDF]
  • Marko Brodarič, Peter Peer, Vitomir Štruc, Cross-Dataset Deepfake Detection: Evaluating the Generalization Capabilities of Modern DeepFake Detectors, Proceedings of the 27th Computer Vision Winter Workshop (CVWW), pp. 1-10, 2024 [PDF]

Funding agency