haofanwang / awesome-conditional-content-generation
Up-to-date resources for conditional content generation, including human motion generation, image or video generation and editing.
☆282 · Updated last year
Alternatives and similar repositories for awesome-conditional-content-generation
Users interested in awesome-conditional-content-generation are comparing it to the repositories listed below:
- [ICCV 2023] The official implementation of the paper "HumanSD: A Native Skeleton-Guided Diffusion Model for Human Image Generation" ☆306 · Updated 2 years ago
- ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model ☆372 · Updated last year
- [TPAMI] Human Motion Video Generation: A Survey (https://ieeexplore.ieee.org/document/11106267) ☆293 · Updated last week
- [CVPR 2023] The official implementation of the paper "Human-Art: A Versatile Human-Centric Dataset Bridging Natural and Artificial …" ☆276 · Updated 2 years ago
- The official repository for TalkSHOW: Generating Holistic 3D Human Motion from Speech [CVPR 2023]. ☆366 · Updated 2 years ago
- [ICML 2024] 🍅HumanTOMATO: Text-aligned Whole-body Motion Generation ☆362 · Updated last year
- [CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation ☆259 · Updated 2 years ago
- [CVPR 2023] Executing your Commands via Motion Diffusion in Latent Space, a fast and high-quality motion diffusion model ☆716 · Updated 2 years ago
- Official PyTorch implementation of the paper "MotionCLIP: Exposing Human Motion Generation to CLIP Space" ☆490 · Updated 2 years ago
- Code for the CVPR 2022 paper "Bailando: 3D dance generation via Actor-Critic GPT with Choreographic Memory" ☆427 · Updated 2 years ago
- [CVPR 2024] MotionEditor is the first diffusion-based model capable of video motion editing. ☆186 · Updated 5 months ago
- [CVPR 2024 Highlight] Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis ☆238 · Updated last year
- A collection of resources and papers on Motion Diffusion Models. ☆49 · Updated 2 years ago
- [NeurIPS 2023] Official implementation of the paper "DreamWaltz: Make a Scene with Complex 3D Animatable Avatars". ☆191 · Updated last year
- Official implementation of "TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts (ECCV2022… ☆134 · Updated last year
- The codebase for SHOW in "Generating Holistic 3D Human Motion from Speech" [CVPR 2023]. ☆239 · Updated last year
- The official implementation of the paper "Human Motion Diffusion as a Generative Prior" ☆512 · Updated last year
- [NeurIPS 2023] Act As You Wish: Fine-Grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs ☆128 · Updated 2 years ago
- [ICCV 2023] TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration ☆102 · Updated last year
- [ICCV 2021] The official repo for the paper "Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates". ☆97 · Updated 2 years ago
- [CVPR 2024] VideoBooth: Diffusion-based Video Generation with Image Prompts ☆311 · Updated last year
- Official code for VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation. ☆87 · Updated last year
- [ICCV 2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation". ☆86 · Updated last year
- Single Motion Diffusion Model ☆410 · Updated 10 months ago
- [CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation ☆191 · Updated last year
- The official PyTorch implementation of the paper "MotionGPT: Finetuned LLMs are General-Purpose Motion Generators" ☆238 · Updated 2 years ago
- ☆144 · Updated last year
- [TMM 2023] Language-Guided Face Animation by Recurrent StyleGAN-based Generator ☆11 · Updated 2 years ago
- [NeurIPS 2023] FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing ☆133 · Updated 2 years ago
- [CVPR 2024] AMUSE: Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion ☆133 · Updated last year