XingqunQi-lab / EmotionGestures
Data and PyTorch implementation of IEEE TMM "EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation"
☆26 · Updated last year
Alternatives and similar repositories for EmotionGestures
Users who are interested in EmotionGestures are comparing it to the repositories listed below.
- UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons (ACM MM 2023 Oral) ☆51 · Updated last year
- Towards Variable and Coordinated Holistic Co-Speech Motion Generation, CVPR 2024 ☆57 · Updated last year
- ☆41 · Updated 3 weeks ago
- Code for CVPR 2024 paper: ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis ☆33 · Updated 2 months ago
- ☆18 · Updated 11 months ago
- QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation (CVPR 2023 Highlight) ☆90 · Updated last year
- Freetalker: Controllable Speech and Text-Driven Gesture Generation Based on Diffusion Models for Enhanced Speaker Naturalness (ICASSP 202… ☆70 · Updated last year
- [ICCV-2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation" ☆85 · Updated last year
- PATS Dataset. Aligned Pose-Audio-Transcripts and Style for co-speech gesture research ☆61 · Updated 2 years ago
- MMHead: Towards Fine-grained Multi-modal 3D Facial Animation (ACM MM 2024) ☆29 · Updated 3 months ago
- [CVPR 2024] AMUSE: Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion ☆126 · Updated 10 months ago
- ICCV 2025 ☆40 · Updated 2 weeks ago
- The official PyTorch code for TalkingStyle: Personalized Speech-Driven Facial Animation with Style Preservation ☆26 · Updated last year
- ☆18 · Updated 10 months ago
- Official Implementation of AQ-GT: a Temporally Aligned and Quantized GRU-Transformer for Co-Speech Gesture Synthesis with the extension (… ☆21 · Updated last year
- Official implementation of SingingHead: A Large-scale 4D Dataset for Singing Head Animation (TMM 25) ☆58 · Updated 3 months ago
- Official Implementation of the Paper: Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation (ACM MM 2024) ☆66 · Updated last month
- Code for the paper "Joint Co-Speech Gesture and Expressive Talking Face Generation using Diffusion with Adapters" ☆21 · Updated 6 months ago
- [AAAI 2024] SAAS - Official PyTorch Implementation ☆10 · Updated last year
- Official PyTorch implementation for Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion (CVPR 2022) ☆116 · Updated 10 months ago
- [ICME 2025] DiffusionTalker: Efficient and Compact Speech-Driven 3D Talking Head via Personalizer-Guided Distillation ☆18 · Updated 3 months ago
- A novel approach for personalized speech-driven 3D facial animation ☆51 · Updated last year
- ☆59 · Updated last year
- [CVPR 2022] Code for "Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation" ☆141 · Updated 2 years ago
- Code for "Audio-Driven Co-Speech Gesture Video Generation" (NeurIPS 2022, Spotlight Presentation) ☆87 · Updated 2 years ago
- [INTERSPEECH'24] Official repository for "Enhancing Speech-Driven 3D Facial Animation with Audio-Visual Guidance from Lip Reading Expert" ☆16 · Updated 3 weeks ago
- [AAAI 2025] Official repo for the paper "MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls" ☆99 · Updated 5 months ago
- Official implementation of "MoST: Motion Style Transformer between Diverse Action Contents" ☆33 · Updated last year
- [ECCV 2024 official] KMTalk: Speech-Driven 3D Facial Animation with Key Motion Embedding ☆32 · Updated last year
- ☆10 · Updated 9 months ago