We are happy to share that the paper “C-VTON: Context-Driven Image-Based Virtual Try-On Network”, written by first-time author Benjamin Fele, was accepted for presentation at the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). In this work, we present a new deep learning model for virtual try-on that achieves state-of-the-art results on two challenging benchmarks. The paper is the result of research conducted within the scope of the “DeepBeauty” project funded by ARRS.
Paper authors: Benjamin Fele, Ajda Lampe, Peter Peer, and Vitomir Štruc
Paper preprint: PDF with Appendix (reduced image quality due to size).
Below are a few sample results generated with C-VTON.