m-hamza-mughal / convofusion
Code for the CVPR 2024 paper "ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis"
☆35 · Updated 6 months ago
Alternatives and similar repositories for convofusion
Users interested in convofusion are comparing it to the repositories listed below.
- ICCV 2025 ☆55 · Updated 2 months ago
- [CVPR 2024] Arbitrary Motion Style Transfer with Multi-condition Motion Latent Diffusion Model ☆77 · Updated last year
- [CVPR 2024] AMUSE: Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion ☆132 · Updated last year
- [CVPR 2024] Towards Variable and Coordinated Holistic Co-Speech Motion Generation ☆58 · Updated last year
- [ICCV 2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation" ☆86 · Updated last year
- [AAAI 2025] Official repo for the paper "MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls" ☆112 · Updated 9 months ago
- Official implementation of MCM: Multi-Condition Motion Synthesis Framework ☆21 · Updated 11 months ago
- Data and PyTorch implementation of the IEEE TMM paper "EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation" ☆29 · Updated last year
- Official PyTorch code for "Expressive 3D Facial Animation Generation Based on Local-to-Global Latent Diffusion" ☆29 · Updated last year
- [ACM MM 2023 Oral] UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons ☆54 · Updated last year
- [ACM MM 2024] Official implementation of the paper "Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation"