Ditzley / joint-gestures-and-face
Code for the paper "Joint Co-Speech Gesture and Expressive Talking Face Generation using Diffusion with Adapters"
☆24 · Updated 11 months ago
Alternatives and similar repositories for joint-gestures-and-face
Users interested in joint-gestures-and-face are comparing it to the repositories listed below.
- ☆45 · Updated 5 months ago
- Towards Variable and Coordinated Holistic Co-Speech Motion Generation, CVPR 2024 ☆58 · Updated last year
- [ICME 2025] DiffusionTalker: Efficient and Compact Speech-Driven 3D Talking Head via Personalizer-Guided Distillation ☆22 · Updated 8 months ago
- [ECCV 2024] Dyadic Interaction Modeling for Social Behavior Generation ☆62 · Updated 7 months ago
- [ECCV 2024] ScanTalk: 3D Talking Heads from Unregistered Scans ☆51 · Updated 8 months ago
- MMHead: Towards Fine-grained Multi-modal 3D Facial Animation (ACM MM 2024) ☆34 · Updated 2 months ago
- ☆51 · Updated 5 months ago
- [AAAI 2024] Style2Talker - Official PyTorch Implementation ☆47 · Updated 4 months ago
- Official Access to ICIP 2024 "THQA: A Perceptual Quality Assessment Database for Talking Heads" ☆35 · Updated 4 months ago
- UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons (ACM MM 2023 Oral) ☆54 · Updated last year
- [ICCV 2025] SemTalk: Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis ☆36 · Updated 2 weeks ago
- [CVPR 2024] AMUSE: Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion ☆132 · Updated last year
- ☆29 · Updated 5 months ago
- Official Implementation of the Paper: Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation (ACM MM 2024) ☆72 · Updated 6 months ago
- [CVPR 2025] Official code for "Synergizing Motion and Appearance: Multi-Scale Compensatory Codebooks for Talking Head Video Generation" ☆64 · Updated 6 months ago
- ☆20 · Updated last year
- [ICCV 2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation" ☆86 · Updated last year
- Freetalker: Controllable Speech and Text-Driven Gesture Generation Based on Diffusion Models for Enhanced Speaker Naturalness (ICASSP 202…) ☆72 · Updated last year
- Official implementation of SingingHead: A Large-scale 4D Dataset for Singing Head Animation (TMM 25) ☆62 · Updated last week
- (AAAI 2024) Controllable 3D Face Generation with Conditional Style Code Diffusion ☆38 · Updated last year
- ☆34 · Updated 3 months ago
- [ICASSP 2025] DEGSTalk: Decomposed Per-Embedding Gaussian Fields for Hair-Preserving Talking Face Synthesis ☆52 · Updated last month
- ☆24 · Updated last year
- NeurIPS 2022 ☆39 · Updated 3 years ago
- [ICLR 2024] Generalizable and Precise Head Avatar from Image(s) ☆70 · Updated last year
- Multi-human Interactive Talking Dataset ☆59 · Updated 4 months ago
- [CVPR 2025] KeyFace: Expressive Audio-Driven Facial Animation for Long Sequences via KeyFrame Interpolation ☆68 · Updated 8 months ago
- LinguaLinker: Audio-Driven Portraits Animation with Implicit Facial Control Enhancement ☆75 · Updated last year
- Official implementation of "MoST: Motion Style Transformer between Diverse Action Contents" ☆36 · Updated last year
- ARTalk generates realistic 3D head motions (lip sync, blinking, expressions, head poses) from audio in ⚡ real-time ⚡ ☆105 · Updated 6 months ago