junshutang / Make-It-Vivid
[CVPR 2024] Make-It-Vivid: Dressing Your Animatable Biped Cartoon Characters from Text
☆69 · Updated last year
Alternatives and similar repositories for Make-It-Vivid
Users interested in Make-It-Vivid are comparing it to the repositories listed below.
- [CVPR 2024] DiffusionGAN3D: Boosting Text-guided 3D Generation and Domain Adaptation by Combining 3D GANs and Diffusion Priors. ☆70 · Updated last year
- ☆43 · Updated 11 months ago
- [CVPR 2024] DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models ☆92 · Updated last year
- [SIGGRAPH Asia 2024] I2VEdit: First-Frame-Guided Video Editing via Image-to-Video Diffusion Models ☆64 · Updated 3 weeks ago
- ObjCtrl-2.5D ☆47 · Updated 3 months ago
- AniCrafter: Customizing Realistic Human-Centric Animation via Avatar-Background Conditioning in Video Diffusion Models ☆55 · Updated last week
- A PyTorch implementation of “X-Dreamer: Creating High-quality 3D Content by Bridging the Domain Gap Between Text-to-2D and Text-to-3D Gen… ☆74 · Updated last year
- ☆80 · Updated 5 months ago
- ☆64 · Updated last year
- [NeurIPS 2023] PrimDiffusion: Volumetric Primitives Diffusion for 3D Human Generation ☆118 · Updated last year
- Code for "GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from a Single Image", CVPR 2024 ☆93 · Updated last year
- ID-Sculpt [AAAI 2025] ☆70 · Updated 3 months ago
- [ECCV 2024] IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation ☆55 · Updated 9 months ago
- Implementation of paper: Flux Already Knows – Activating Subject-Driven Image Generation without Training ☆41 · Updated last month
- HyperMotion is a pose-guided human image animation framework based on a large-scale video diffusion Transformer. ☆86 · Updated last week
- ☆20 · Updated 11 months ago
- CustomDiffusion360: Customizing Text-to-Image Diffusion with Camera Viewpoint Control ☆170 · Updated 7 months ago
- [SIGGRAPH'24] Official code of HeadArtist: Text-conditioned 3D Head Generation with Self Score Distillation ☆71 · Updated 10 months ago
- [ACM MM24] MotionMaster: Training-free Camera Motion Transfer for Video Generation ☆92 · Updated 9 months ago
- Official implementation of SyncDiffusion. ☆166 · Updated last year
- Official code release of our paper "Shape-for-Motion: Precise and Consistent Video Editing with 3D Proxy" ☆45 · Updated last week
- [CVPR 2025] Official code for "Synergizing Motion and Appearance: Multi-Scale Compensatory Codebooks for Talking Head Video Generation" ☆56 · Updated last month
- [CVPR 2024] Official PyTorch implementation of "HarmonyView: Harmonizing Consistency and Diversity in One-Image-to-3D" ☆132 · Updated last year
- [CVPR 2023] Learning 3D-aware Image Synthesis with Unknown Pose Distribution ☆57 · Updated 2 years ago
- Official implementation of `AToM: Amortized Text-to-Mesh using 2D Diffusion` ☆85 · Updated last year
- Code repository for the paper "SphereHead: Stable 3D Full-head Synthesis with Spherical Tri-plane Representation" ☆96 · Updated 3 months ago
- Code for MagicPose4D: Crafting Articulated Models with Appearance and Motion Control ☆102 · Updated 9 months ago
- Code for Ray Conditioning ☆29 · Updated last year
- [ECCV 2024] Make-Your-3D: Fast and Consistent Subject-Driven 3D Content Generation ☆125 · Updated last year
- [CVPR 2024] "Taming Mode Collapse in Score Distillation for Text-to-3D Generation" by Peihao Wang, Dejia Xu, Zhiwen Fan, Dilin Wang, Srey… ☆49 · Updated last year