OpenGVLab / Awesome-DragGAN
Awesome-DragGAN: A curated list of papers, tutorials, and repositories related to DragGAN
☆84 · Updated 2 years ago
Alternatives and similar repositories for Awesome-DragGAN
Users interested in Awesome-DragGAN are comparing it to the libraries listed below.
- ☆64 · Updated 2 years ago
- DeepFloyd-IF-powered implementation of DreamFusion ☆70 · Updated 2 years ago
- ☆65 · Updated last year
- [ICCV 2025] TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation ☆33 · Updated 11 months ago
- Navigate dreamscapes with a click – your chosen point guides the drone’s flight in a thrilling visual journey. ☆47 · Updated 2 months ago
- [3DV 2025] Learning Naturally Aggregated Appearance for Efficient 3D Editing ☆33 · Updated 9 months ago
- (SIGGRAPH Asia 2023) Project page of "HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image" ☆10 · Updated last year
- 🏞️ Official implementation of "Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition" ☆109 · Updated last year
- ☆24 · Updated last year
- ☆42 · Updated 2 years ago
- Code for the paper "Background Prompting for Improved Object Depth" ☆29 · Updated 2 years ago
- A one-stop library to standardize the inference and evaluation of all the conditional video generation models. ☆50 · Updated 9 months ago
- ObjCtrl-2.5D ☆57 · Updated 7 months ago
- Diffusion Models as Data Mining Tools ☆54 · Updated 6 months ago
- ☆26 · Updated 8 months ago
- Breathing New Life into 3D Assets with Generative Repainting ☆99 · Updated 2 years ago
- ☆86 · Updated last year
- ☆55 · Updated last year
- ☆22 · Updated 11 months ago
- HOSNeRF: Dynamic Human-Object-Scene Neural Radiance Fields from a Single Video ☆68 · Updated last year
- From Geometry to Texture: A Hierarchical Framework for Efficient Text-to-3D Generation ☆33 · Updated 2 years ago
- ☆92 · Updated 2 years ago
- Implementation of DreamCraft3D, 3D content generation, in PyTorch ☆81 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- 🤗 Unofficial huggingface/diffusers-based implementation of the paper "Training-Free Layout Control with Cross-Attention Guidance". ☆42 · Updated 2 years ago
- [ICLR 2024] Contextualized Diffusion Models for Text-Guided Image and Video Generation ☆70 · Updated last year
- A curated list of papers and resources for text-to-image evaluation. ☆30 · Updated 2 years ago
- ☆62 · Updated 2 years ago
- An interactive demo based on Segment-Anything for stroke-based painting which enables human-like painting. ☆35 · Updated 2 years ago
- Official implementation of the ICCV 2023 paper "StyleInV: A Temporal Style Modulated Inversion Network for Unconditional Video Generation" ☆23 · Updated last year