haofanwang / awesome-conditional-content-generation
Up-to-date resources for conditional content generation, including human motion generation, image or video generation, and editing.
☆278 · Updated last year
Alternatives and similar repositories for awesome-conditional-content-generation
Users interested in awesome-conditional-content-generation are comparing it to the repositories listed below.
- [ICCV 2023] The official implementation of the paper "HumanSD: A Native Skeleton-Guided Diffusion Model for Human Image Generation" ☆302 · Updated last year
- ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model ☆369 · Updated last year
- [Accepted by TPAMI] Human Motion Video Generation: A Survey (https://ieeexplore.ieee.org/document/11106267) ☆256 · Updated last week
- [ICML 2024] 🍅HumanTOMATO: Text-aligned Whole-body Motion Generation ☆346 · Updated last year
- This is the official repository for TalkSHOW: Generating Holistic 3D Human Motion from Speech [CVPR2023]. ☆357 · Updated last year
- [CVPR 2023] The official implementation of the paper "Human-Art: A Versatile Human-Centric Dataset Bridging Natural and Artificial … ☆267 · Updated 2 years ago
- [CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation ☆254 · Updated 2 years ago
- [CVPR 2024 Highlight] Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis ☆230 · Updated 8 months ago
- [CVPR 2023] Executing your Commands via Motion Diffusion in Latent Space, a fast and high-quality motion diffusion model ☆671 · Updated 2 years ago
- Code for CVPR 2022 paper "Bailando: 3D dance generation via Actor-Critic GPT with Choreographic Memory" ☆417 · Updated last year
- Official Pytorch implementation of the paper "MotionCLIP: Exposing Human Motion Generation to CLIP Space" ☆477 · Updated last year
- Official code for VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation. ☆84 · Updated last year
- Single Motion Diffusion Model ☆402 · Updated 6 months ago
- [NeurIPS 2023] Official implementation of the paper "DreamWaltz: Make a Scene with Complex 3D Animatable Avatars". ☆190 · Updated 11 months ago
- [CVPR2024] MotionEditor is the first diffusion-based model capable of video motion editing. ☆179 · Updated 3 weeks ago
- [NeurIPS 2023] Act As You Wish: Fine-Grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs ☆128 · Updated last year
- [CVPR2024] VideoBooth: Diffusion-based Video Generation with Image Prompts ☆304 · Updated last year
- [CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation ☆180 · Updated last year
- A collection of resources and papers on Motion Diffusion Models. ☆48 · Updated 2 years ago
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising". ☆303 · Updated last year
- This is the codebase for SHOW in Generating Holistic 3D Human Motion from Speech [CVPR2023]. ☆235 · Updated last year
- [ICCV-2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation". ☆85 · Updated last year
- [ICCV 2023] TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration ☆99 · Updated last year
- Official implementation of "TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts (ECCV2022… ☆126 · Updated last year
- OmniControl: Control Any Joint at Any Time for Human Motion Generation, ICLR 2024 ☆353 · Updated last year
- ☆277 · Updated last year
- [ICCV 2021] The official repo for the paper "Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates". ☆96 · Updated 2 years ago
- [NeurIPS D&B Track 2024] Official implementation of HumanVid ☆332 · Updated 4 months ago
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing" ☆231 · Updated 2 years ago
- [NeurIPS 2023] FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing ☆132 · Updated last year