haofanwang / awesome-conditional-content-generation
Up-to-date resources for conditional content generation, including human motion generation and image or video generation and editing.
☆279 · Updated last year
Alternatives and similar repositories for awesome-conditional-content-generation
Users interested in awesome-conditional-content-generation are comparing it to the repositories listed below.
- [ICCV 2023] The official implementation of the paper "HumanSD: A Native Skeleton-Guided Diffusion Model for Human Image Generation" ☆303 · Updated 2 years ago
- ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model ☆368 · Updated last year
- [ICML 2024] 🍅 HumanTOMATO: Text-aligned Whole-body Motion Generation ☆351 · Updated last year
- [CVPR 2023] The official repository for TalkSHOW: Generating Holistic 3D Human Motion from Speech ☆361 · Updated 2 years ago
- [CVPR 2024] MotionEditor is the first diffusion-based model capable of video motion editing ☆183 · Updated 2 months ago
- [CVPR 2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation ☆255 · Updated 2 years ago
- [Accepted by TPAMI] Human Motion Video Generation: A Survey (https://ieeexplore.ieee.org/document/11106267) ☆267 · Updated last week
- [NeurIPS 2023] Official implementation of the paper "DreamWaltz: Make a Scene with Complex 3D Animatable Avatars" ☆189 · Updated last year
- [CVPR 2023] The codebase for SHOW in Generating Holistic 3D Human Motion from Speech ☆236 · Updated last year
- Official implementation of "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing" ☆231 · Updated 2 years ago
- [CVPR 2023] The official implementation of the paper "Human-Art: A Versatile Human-Centric Dataset Bridging Natural and Artificial …" ☆269 · Updated 2 years ago
- Single Motion Diffusion Model ☆403 · Updated 7 months ago
- [ICCV 2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation" ☆86 · Updated last year
- A collection of resources and papers on Motion Diffusion Models ☆48 · Updated 2 years ago
- Official code for "VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation" ☆84 · Updated last year
- ☆217 · Updated 8 months ago
- [NeurIPS 2023] Act As You Wish: Fine-Grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs ☆128 · Updated 2 years ago
- [CVPR 2024 Highlight] Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis ☆232 · Updated 9 months ago
- [CVPR 2024] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation ☆186 · Updated last year
- [CVPR 2023] Executing your Commands via Motion Diffusion in Latent Space, a fast and high-quality motion diffusion model ☆686 · Updated 2 years ago
- [CVPR 2024] Official PyTorch implementation of DanceCamera3D: 3D Camera Movement Synthesis with Music and Dance ☆111 · Updated last year
- Official PyTorch implementation of the paper "MotionCLIP: Exposing Human Motion Generation to CLIP Space" ☆478 · Updated last year
- [CVPR 2024] VideoBooth: Diffusion-based Video Generation with Image Prompts ☆305 · Updated last year
- [NeurIPS 2024 D&B Track] Official implementation of HumanVid ☆337 · Updated last month
- Code for the CVPR 2022 paper "Bailando: 3D dance generation via Actor-Critic GPT with Choreographic Memory" ☆419 · Updated last year
- The official PyTorch implementation of the paper "MotionGPT: Finetuned LLMs are General-Purpose Motion Generators" ☆235 · Updated last year
- [ICCV 2023] TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration ☆101 · Updated last year
- [3DV 2024] Official repository for "TADA! Text to Animatable Digital Avatars" ☆305 · Updated 7 months ago
- The official implementation of the paper "Human Motion Diffusion as a Generative Prior" ☆498 · Updated 9 months ago
- [NeurIPS 2023] FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing ☆131 · Updated last year