m-hamza-mughal / convofusion
Code for CVPR 2024 paper: ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis
☆35 · Updated 7 months ago
Alternatives and similar repositories for convofusion
Users interested in convofusion are comparing it to the repositories listed below
- ICCV 2025 ☆62 · Updated 3 months ago
- [ICCV 2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation" ☆86 · Updated last year
- [CVPR 2024] AMUSE: Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion ☆132 · Updated last year
- [CVPR 2024] Arbitrary Motion Style Transfer with Multi-condition Motion Latent Diffusion Model ☆79 · Updated last year
- Towards Variable and Coordinated Holistic Co-Speech Motion Generation (CVPR 2024) ☆58 · Updated last year
- [AAAI 2025] Official repo for the paper "MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls" ☆119 · Updated 11 months ago
- UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons (ACM MM 2023 Oral) ☆54 · Updated last year
- Data and PyTorch implementation of the IEEE TMM paper "EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation" ☆30 · Updated last year
- Official implementation of MCM: Multi-condition Motion Synthesis Framework ☆20 · Updated last year
- Official implementation of the paper "Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation" (ACM MM 2024) ☆72 · Updated 6 months ago
- DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models (IJCAI 2023) | The DiffuseStyleGesture+ ent… ☆201 · Updated last month
- A PyTorch implementation of the paper "Motion Flow Matching for Human Motion Synthesis and Editing" ☆44 · Updated last year
- The official PyTorch code for "Expressive 3D Facial Animation Generation Based on Local-to-Global Latent Diffusion" ☆32 · Updated last year
- ☆48 · Updated 11 months ago
- [CVPR 2024] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation ☆191 · Updated last year
- [ICCV 2023] TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration ☆102 · Updated last year
- ☆21 · Updated last year
- ☆118 · Updated 10 months ago
- ☆12 · Updated 3 years ago
- ☆56 · Updated last year
- Official repository for "MMM: Generative Masked Motion Model" (CVPR 2024 Highlight) ☆124 · Updated 5 months ago
- [CVPR 2024] POPDG: Popular 3D Dance Generation with PopDanceSet ☆57 · Updated 6 months ago
- ☆45 · Updated 6 months ago
- Code for the paper "Learning Semantic Latent Directions for Accurate and Controllable Human Motion Prediction" (ECCV 2024) ☆30 · Updated last year
- [CVPR 2025] SALAD: Skeleton-aware Latent Diffusion for Text-driven Motion Generation and Editing ☆87 · Updated 6 months ago
- ☆58 · Updated 2 years ago
- Official code release of "DisCoRD: Discrete Tokens to Continuous Motion via Rectified Flow Decoding" (ICCV 2025 Highlight) ☆44 · Updated 3 months ago
- Light-T2M: A Lightweight and Fast Model for Text-to-Motion Generation (AAAI 2025) ☆36 · Updated 9 months ago
- QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation (CVPR 2023 Highlight) ☆102 · Updated 2 years ago
- [CVPR 2022] Code for "Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation" ☆143 · Updated 2 years ago