zyhbili / LivelySpeaker
[ICCV 2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation".
☆85 · Updated last year
Alternatives and similar repositories for LivelySpeaker
Users interested in LivelySpeaker are comparing it to the repositories listed below.
- QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation (CVPR 2023 Highlight) ☆90 · Updated last year
- [CVPR 2024] AMUSE: Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion ☆127 · Updated 11 months ago
- Towards Variable and Coordinated Holistic Co-Speech Motion Generation (CVPR 2024) ☆57 · Updated last year
- [CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation ☆169 · Updated last year
- [CVPR 2022] Code for "Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation" ☆143 · Updated 2 years ago
- UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons (ACM MM 2023 Oral) ☆52 · Updated last year
- DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models (IJCAI 2023) | The DiffuseStyleGesture+ ent… ☆189 · Updated last year
- This is the codebase for SHOW in "Generating Holistic 3D Human Motion from Speech" [CVPR 2023] ☆233 · Updated 10 months ago
- [AAAI 2025] Official repo for the paper "MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls" ☆100 · Updated 6 months ago
- ☆42 · Updated last month
- Code for "Audio-Driven Co-Speech Gesture Video Generation" (NeurIPS 2022, Spotlight Presentation) ☆87 · Updated 2 years ago
- ☆109 · Updated 5 months ago
- ☆213 · Updated 5 months ago
- [ICCV 2023] TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration ☆98 · Updated last year
- This is the official repository for TalkSHOW: Generating Holistic 3D Human Motion from Speech [CVPR 2023]. ☆346 · Updated last year
- [ICCV 2021] The official repo for the paper "Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates". ☆95 · Updated 2 years ago
- [CVPR 2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation ☆251 · Updated last year
- ☆58 · Updated last year
- [CVPR 2024] Arbitrary Motion Style Transfer with Multi-condition Motion Latent Diffusion Model ☆73 · Updated 9 months ago
- Official implementation of the paper "Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation" (ACM MM 2024) ☆67 · Updated 2 months ago
- Official PyTorch implementation for "Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion" (CVPR 2022) ☆116 · Updated 11 months ago
- Code for the CVPR 2024 paper "ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis" ☆33 · Updated 3 months ago
- 4D Facial Expression Diffusion Model ☆72 · Updated last year
- [ECCV 2024] Dyadic Interaction Modeling for Social Behavior Generation ☆58 · Updated 3 months ago
- ICCV 2025 ☆43 · Updated last month
- Code to reproduce the results for our SIGGRAPH 2023 paper "Listen Denoise Action" ☆175 · Updated last year
- AIOZ-GDANCE: a large-scale dataset & baseline for music-driven group dance generation (CVPR 2023) ☆93 · Updated last year
- Data and PyTorch implementation of the IEEE TMM paper "EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation" ☆26 · Updated last year
- This is the official inference code of PD-FGC ☆93 · Updated last year
- Official implementation for Audio2Motion: Generating Diverse Gestures from Speech with Conditional Variational Autoencoders. ☆138 · Updated last year