Our GlassesGAN paper was accepted for presentation at CVPR 2023. In this work, Fulbright visiting student Richard Plesh (Clarkson University) proposes a new approach for face-image editing in the latent space of the StyleGANv2 model, with an application in fashion.
Richard Plesh, Peter Peer and Vitomir Štruc, GlassesGAN: Eyewear Personalization using Synthetic Appearance Discovery and Targeted Subspace Modeling, accepted for CVPR 2023.
Abstract: “We present GlassesGAN, a novel image editing framework for custom design of glasses, that sets a new standard in terms of image quality, edit realism, and continuous multi-style edit capability. To facilitate the editing process with GlassesGAN, we propose a Targeted Subspace Modelling (TSM) procedure that, based on a novel mechanism for (synthetic) appearance discovery in the latent space of a pre-trained GAN generator, constructs an eyeglasses-specific (latent) subspace that the editing framework can utilize. Additionally, we also introduce an appearance-constrained subspace initialization (SI) technique that centers the latent representation of the given input image in the well-defined part of the constructed subspace to improve the reliability of the learned edits. We test GlassesGAN on two (diverse) high-resolution datasets (CelebA-HQ and SiblingsDB-HQf) and compare it to three state-of-the-art competitors, i.e., InterfaceGAN, GANSpace, and MaskGAN. The reported results show that GlassesGAN convincingly outperforms all competing techniques, while offering additional functionality (e.g., fine-grained multi-style editing) not available with any of the competitors.”
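To give a rough intuition for the subspace-based editing the abstract describes, the sketch below models a targeted latent subspace with PCA over latent-code differences and applies a continuous edit along its directions. This is a minimal illustration under assumptions of our own, not the authors' released implementation: all names, shapes, and the use of random vectors in place of real StyleGANv2 latents are hypothetical.

```python
# Illustrative sketch of latent-subspace editing in the spirit of
# Targeted Subspace Modelling (TSM). Everything here is an assumption:
# real usage would use w-space latents from a pre-trained StyleGANv2,
# with "appearance discovery" supplying latents that carry eyeglasses.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512   # typical StyleGAN2 w-space dimensionality
N_SAMPLES = 200    # discovered latents exhibiting the target appearance
K = 5              # size of the eyeglasses-specific subspace

# Hypothetical latent-code differences:
# (latent with glasses) - (matching latent without glasses).
deltas = rng.normal(size=(N_SAMPLES, LATENT_DIM))

# Build the targeted subspace: top-K principal directions of the deltas.
mean_delta = deltas.mean(axis=0)
_, _, vt = np.linalg.svd(deltas - mean_delta, full_matrices=False)
subspace = vt[:K]                  # (K, LATENT_DIM), orthonormal rows

# Subspace initialization: express the input latent's offset from the
# mean in subspace coordinates, so edits start from a well-defined point.
w_input = rng.normal(size=LATENT_DIM)
coords = subspace @ (w_input - mean_delta)     # K coordinates

# Continuous multi-style edit: shift the coordinates and map back to
# the full latent space.
edit = np.zeros(K)
edit[0] = 2.0                      # strength along the first direction
w_edited = w_input + edit @ subspace
```

Conceptually, the edit stays inside the low-dimensional subspace spanned by `subspace`, so varying the `edit` coefficients continuously interpolates between discovered eyewear appearances rather than moving the latent in arbitrary directions.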
The paper and code will be made available shortly.