Our paper "On the Vulnerability of Deepfake Detectors to Attacks Generated by Denoising Diffusion Models" was presented at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2024, as part of the Workshop on Manipulation, Adversarial and Presentation Attacks In Biometrics.
The paper shows how widely used deepfake detection models can be deceived by black-box attacks crafted with Denoising Diffusion Models (DDMs), and examines the characteristics of these attacks. Notably, we found that the attacks remain hard to detect even when examples of them are introduced to the deepfake detection system during its training phase.
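To give a rough intuition for how a diffusion-based regeneration attack operates, the sketch below illustrates only the standard DDPM forward-noising step with NumPy; the actual attack in the paper relies on a trained DDM whose reverse (denoising) process reconstructs the image, which is not reproduced here. All array shapes, the noise schedule, and the random image stand-in are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def forward_noise(x0, t, betas, rng):
    """DDPM forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.

    Partially noising a deepfake image to an intermediate step t is the first
    half of a regeneration attack; a trained DDM's reverse process would then
    denoise x_t back to a visually similar image (not implemented here).
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # a common linear DDPM noise schedule
x0 = rng.random((64, 64))               # stand-in for a deepfake image
x_t = forward_noise(x0, t=100, betas=betas, rng=rng)
```

The key intuition is that choosing a moderate `t` destroys the subtle, detector-specific artifacts while a strong denoiser can still recover the visible content, yielding an image that looks the same to humans but no longer triggers the detector.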
The paper is available here.