kaist-ami / 3d-talking-head-av-guidance
[INTERSPEECH'24] Official repository for "Enhancing Speech-Driven 3D Facial Animation with Audio-Visual Guidance from Lip Reading Expert"
☆17 · Updated 4 months ago
Alternatives and similar repositories for 3d-talking-head-av-guidance
Users interested in 3d-talking-head-av-guidance are comparing it to the repositories listed below.
- ☆29 · Updated 4 months ago
- A novel approach for personalized speech-driven 3D facial animation ☆53 · Updated last year
- Official code release of "DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation" [AAAI 2025] ☆53 · Updated 8 months ago
- [ECCV 2024 official] KMTalk: Speech-Driven 3D Facial Animation with Key Motion Embedding ☆34 · Updated last year
- Official implementation of SingingHead: A Large-scale 4D Dataset for Singing Head Animation (TMM 25) ☆60 · Updated 7 months ago
- Mapping Mediapipe's 52 blendshapes to FLAME's expression coefficients and poses. ☆42 · Updated last month
- ARTalk generates realistic 3D head motions (lip sync, blinking, expressions, head poses) from audio in ⚡ real-time ⚡. ☆95 · Updated 4 months ago
- ☆50 · Updated 3 months ago
- Processing monocular face videos ☆14 · Updated 8 months ago
- ☆94 · Updated 3 months ago
- ☆33 · Updated last year
- ☆22 · Updated 3 months ago
- ☆85 · Updated last year
- This is the repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation ☆129 · Updated 4 months ago
- [NeurIPS 2024] Generalizable and Animatable Gaussian Head Avatar ☆63 · Updated 7 months ago
- [ICLR 2024] Generalizable and Precise Head Avatar from Image(s) ☆71 · Updated last year
- ☆32 · Updated last month
- [ECCV 2024] Dyadic Interaction Modeling for Social Behavior Generation ☆62 · Updated 6 months ago
- [CVPR 2025] KeyFace: Expressive Audio-Driven Facial Animation for Long Sequences via KeyFrame Interpolation ☆64 · Updated 6 months ago
- NeurIPS 2022 ☆39 · Updated 2 years ago
- DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer ☆163 · Updated last year
- Data and PyTorch implementation of IEEE TMM "EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation" ☆29 · Updated last year
- Official implementation of the paper: Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation (ACMMM 2024) ☆71 · Updated 5 months ago
- Code for CVPR 2024 paper: ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis ☆35 · Updated 6 months ago
- ☆102 · Updated last month
- Source code for: Expressive Speech-driven Facial Animation with controllable emotions ☆39 · Updated last year
- ☆71 · Updated 2 years ago
- [CVPR 2022] Code for "Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation" ☆143 · Updated 2 years ago
- ☆42 · Updated 4 months ago
- ☆21 · Updated 9 months ago