kunheek / style-aware-discriminator
CVPR 2022 - Official PyTorch implementation of "A Style-Aware Discriminator for Controllable Image Translation"
☆115 · Updated Oct 28, 2025
Alternatives and similar repositories for style-aware-discriminator
Users interested in style-aware-discriminator are comparing it to the libraries listed below.
- Maximum Spatial Perturbation for Image-to-Image Translation (Official Implementation) · ☆62 · Updated Jul 3, 2022
- Official implementation for "QS-Attn: Query-Selected Attention for Contrastive Learning in I2I Translation" (CVPR 2022) · ☆84 · Updated Jan 13, 2023
- [CVPR 2022] GAN inversion and editing with spatially-adaptive multiple latent layers · ☆174 · Updated Jan 21, 2023
- The official repository of "Encode-in-Style: Latent-based Video Encoding using StyleGAN2" · ☆47 · Updated Feb 15, 2023
- ☆32 · Updated Nov 23, 2021
- CoMoGAN: continuous model-guided image-to-image translation (CVPR 2021 oral) · ☆186 · Updated May 25, 2021
- [CVPR 2022] Unsupervised Image-to-Image Translation with Generative Prior · ☆197 · Updated Jul 23, 2023
- Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors (IJCV 2022) · ☆45 · Updated Jul 6, 2022
- Official PyTorch implementation of IrwGAN for unaligned image-to-image translation · ☆34 · Updated Dec 15, 2021
- Few-shot image translation method working in unstructured environments (ECCV 2022) · ☆47 · Updated Dec 16, 2022
- Vector Quantized Image-to-Image Translation (ECCV 2022) · ☆79 · Updated Nov 28, 2022
- HyperInverter: Improving StyleGAN Inversion via Hypernetwork (CVPR 2022) · ☆119 · Updated Nov 12, 2024
- [NeurIPS 2022, T-PAMI 2023] Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models · ☆268 · Updated Mar 18, 2024
- Official repo of "Text-Free Learning of a Natural Language Interface for Pretrained Face Generators" · ☆66 · Updated Dec 13, 2023
- PITI: Pretraining is All You Need for Image-to-Image Translation · ☆501 · Updated Jun 2, 2024
- [ECCV 2022] Official PyTorch implementation of "Injecting 3D Perception of Controllable NeRF-GAN into StyleGAN for Editable Portrait Imag…" · ☆126 · Updated Feb 4, 2023
- Domain Expansion of Image Generators (CVPR 2023) · ☆88 · Updated Apr 9, 2023
- Artstation-Artistic-face-HQ Dataset (AAHQ) · ☆135 · Updated Dec 3, 2021
- Official PyTorch repo for "StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN" · ☆379 · Updated Nov 20, 2021
- Official PyTorch implementation of StyleMapGAN (CVPR 2021) · ☆464 · Updated Oct 15, 2021
- Official code for our ICCV paper: "From Continuity to Editability: Inverting GANs with Consecutive Images" · ☆41 · Updated Sep 23, 2021
- Improved StyleGAN Embedding: Where are the Good Latents? · ☆121 · Updated Apr 2, 2022
- ☆62 · Updated Nov 17, 2022
- Code for the ICME 2021 paper "SAFIN: Arbitrary Style Transfer With Self-Attentive Factorized Instance Normalization" · ☆34 · Updated Jan 15, 2024
- [TIP 2022] E2Style: Improve the Efficiency and Effectiveness of StyleGAN Inversion · ☆151 · Updated Oct 12, 2023
- [NeurIPS 2021] Low-Rank Subspaces in GANs · ☆126 · Updated Dec 24, 2022
- Cross-Domain and Disentangled Face Manipulation with 3D Guidance · ☆108 · Updated Jul 18, 2022
- Official PyTorch implementation of GGDR (ECCV 2022) · ☆102 · Updated Aug 10, 2022
- Unsupervised image-to-image translation method via a pre-trained StyleGAN2 network · ☆225 · Updated Nov 23, 2020
- Collection of awesome resources on image-to-image translation · ☆1,232 · Updated Sep 20, 2025
- [IJCAI 2021] Disentangled Face Attribute Editing via Instance-Aware Latent Space Search · ☆74 · Updated Jun 3, 2023
- Official implementation of "Third Time's the Charm? Image and Video Editing with StyleGAN3" (AIM ECCVW 2022), https://arxiv.org/abs/2201.1… · ☆686 · Updated Oct 6, 2022
- Code for the paper "High Fidelity Image Synthesis With Deep VAEs In Latent Space"