haofanwang / awesome-conditional-content-generation
Up-to-date resources for conditional content generation, including human motion generation and image/video generation and editing.
☆263 Updated 5 months ago
Alternatives and similar repositories for awesome-conditional-content-generation:
Users interested in awesome-conditional-content-generation are comparing it to the libraries listed below.
- [ICCV 2023] The official implementation of the paper "HumanSD: A Native Skeleton-Guided Diffusion Model for Human Image Generation" ☆285 Updated last year
- ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model ☆341 Updated 8 months ago
- [CVPR2024] VideoBooth: Diffusion-based Video Generation with Image Prompts ☆278 Updated 7 months ago
- This is the official repository for TalkSHOW: Generating Holistic 3D Human Motion from Speech [CVPR2023]. ☆322 Updated last year
- [CVPR2024] MotionEditor is the first diffusion-based model capable of video motion editing. ☆153 Updated 6 months ago
- [IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance ☆186 Updated 10 months ago
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing" ☆222 Updated last year
- [ICML 2024] 🍅HumanTOMATO: Text-aligned Whole-body Motion Generation ☆308 Updated 6 months ago
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising". ☆291 Updated last year
- [IJCV 2024] InterGen: Diffusion-based Multi-human Motion Generation under Complex Interactions ☆239 Updated 5 months ago
- [CVPR 2024 Highlight] Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis ☆204 Updated 9 months ago
- Official PyTorch implementation of the paper "MotionCLIP: Exposing Human Motion Generation to CLIP Space" ☆426 Updated last year
- Single Motion Diffusion Model ☆355 Updated 4 months ago
- [CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation ☆140 Updated 8 months ago
- I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models ☆205 Updated last year
- [NeurIPS D&B Track 2024] Official implementation of HumanVid ☆276 Updated 2 weeks ago
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆391 Updated 6 months ago
- Large Motion Model for Unified Multi-Modal Motion Generation ☆239 Updated 3 weeks ago
- [NeurIPS 2023] Act As You Wish: Fine-Grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs ☆123 Updated last year
- LVDM: Latent Video Diffusion Models for High-Fidelity Long Video Generation ☆464 Updated 2 months ago
- [CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation ☆241 Updated last year
- [NeurIPS 2023] Official implementation of the paper "DreamWaltz: Make a Scene with Complex 3D Animatable Avatars". ☆184 Updated 3 months ago
- UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing ☆101 Updated 2 months ago
- [ICCV23] AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control ☆185 Updated last year
- [3DV 2024] Official Repository for "TADA! Text to Animatable Digital Avatars". ☆292 Updated 7 months ago
- VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models (CVPR 2024) ☆186 Updated 9 months ago
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling ☆140 Updated 3 months ago
- [ECCV 2024] MotionLCM: Official implementation of "MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model" ☆293 Updated last week
- OmniControl: Control Any Joint at Any Time for Human Motion Generation, ICLR 2024 ☆282 Updated 7 months ago