haofanwang / awesome-conditional-content-generation
Up-to-date resources for conditional content generation, including human motion generation, image and video generation, and editing.
☆278 · Updated last year
Alternatives and similar repositories for awesome-conditional-content-generation
Users interested in awesome-conditional-content-generation are comparing it to the libraries listed below.
- [ICCV 2023] The official implementation of the paper "HumanSD: A Native Skeleton-Guided Diffusion Model for Human Image Generation" ☆303 · Updated 2 years ago
- ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model ☆368 · Updated last year
- This is the official repository for TalkSHOW: Generating Holistic 3D Human Motion from Speech [CVPR2023]. ☆360 · Updated last year
- [ICML 2024] 🍅HumanTOMATO: Text-aligned Whole-body Motion Generation ☆346 · Updated last year
- [CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation ☆254 · Updated 2 years ago
- [Accepted by TPAMI] Human Motion Video Generation: A Survey (https://ieeexplore.ieee.org/document/11106267) ☆262 · Updated this week
- [NeurIPS 2023] Official implementation of the paper "DreamWaltz: Make a Scene with Complex 3D Animatable Avatars". ☆189 · Updated last year
- Single Motion Diffusion Model ☆401 · Updated 6 months ago
- [CVPR2024] MotionEditor is the first diffusion-based model capable of video motion editing. ☆180 · Updated last month
- This is the codebase for SHOW in Generating Holistic 3D Human Motion from Speech [CVPR2023]. ☆236 · Updated last year
- [CVPR 2023] The official implementation of the CVPR 2023 paper "Human-Art: A Versatile Human-Centric Dataset Bridging Natural and Artificial … ☆268 · Updated 2 years ago
- [CVPR 2024 Highlight] Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis ☆231 · Updated 9 months ago
- [TMM 2023] Language-Guided Face Animation by Recurrent StyleGAN-based Generator ☆11 · Updated 2 years ago
- Official PyTorch implementation of the paper "MotionCLIP: Exposing Human Motion Generation to CLIP Space" ☆478 · Updated last year
- [ICCV 2023] TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration ☆99 · Updated last year
- [CVPR 2023] Executing your Commands via Motion Diffusion in Latent Space, a fast and high-quality motion diffusion model ☆679 · Updated 2 years ago
- [CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation ☆183 · Updated last year
- Official code for VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation. ☆84 · Updated last year
- A collection of resources and papers on Motion Diffusion Models. ☆48 · Updated 2 years ago
- Code for the CVPR 2022 paper "Bailando: 3D dance generation via Actor-Critic GPT with Choreographic Memory" ☆417 · Updated last year
- [NeurIPS D&B Track 2024] Official implementation of HumanVid ☆333 · Updated last week
- [IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance ☆194 · Updated last year
- [3DV 2024] Official Repository for "TADA! Text to Animatable Digital Avatars". ☆304 · Updated 6 months ago
- [CVPR2024] VideoBooth: Diffusion-based Video Generation with Image Prompts ☆305 · Updated last year
- The PyTorch implementation of our CVPR 2023 paper "Conditional Image-to-Video Generation with Latent Flow Diffusion Models" ☆465 · Updated last year
- [NeurIPS 2023] Act As You Wish: Fine-Grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs ☆128 · Updated last year
- [ICCV-2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation". ☆85 · Updated last year
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing" ☆230 · Updated 2 years ago
- [arXiv 2024] MotionLLM: Understanding Human Behaviors from Human Motions and Videos ☆357 · Updated last year
- Official implementation of "TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts (ECCV2022… ☆127 · Updated last year