Ditzley / joint-gestures-and-face
Code for the paper "Joint Co-Speech Gesture and Expressive Talking Face Generation using Diffusion with Adapters"
☆24 · Updated 10 months ago
Alternatives and similar repositories for joint-gestures-and-face
Users interested in joint-gestures-and-face are comparing it to the repositories listed below.
- [ECCV 2024] ScanTalk: 3D Talking Heads from Unregistered Scans ☆51 · Updated 8 months ago
- ☆45 · Updated 5 months ago
- Towards Variable and Coordinated Holistic Co-Speech Motion Generation, CVPR 2024 ☆58 · Updated last year
- [ICME 2025] DiffusionTalker: Efficient and Compact Speech-Driven 3D Talking Head via Personalizer-Guided Distillation ☆22 · Updated 8 months ago
- [ECCV 2024] Dyadic Interaction Modeling for Social Behavior Generation ☆62 · Updated 7 months ago
- [CVPR 2025] Official code for "Synergizing Motion and Appearance: Multi-Scale Compensatory Codebooks for Talking Head Video Generation" ☆64 · Updated 5 months ago
- ☆51 · Updated 4 months ago
- [AAAI 2024] Style2Talker - Official PyTorch Implementation ☆46 · Updated 3 months ago
- ☆24 · Updated 11 months ago
- MMHead: Towards Fine-grained Multi-modal 3D Facial Animation (ACM MM 2024) ☆33 · Updated last month
- [ICASSP'25] DEGSTalk: Decomposed Per-Embedding Gaussian Fields for Hair-Preserving Talking Face Synthesis ☆52 · Updated last month
- ☆20 · Updated last year
- LinguaLinker: Audio-Driven Portraits Animation with Implicit Facial Control Enhancement ☆75 · Updated last year
- Official Implementation of the Paper: Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation (ACM MM 2024) ☆72 · Updated 6 months ago
- ☆29 · Updated 5 months ago
- ☆61 · Updated 4 months ago
- NeurIPS 2022 ☆39 · Updated 3 years ago
- ☆34 · Updated 2 months ago
- Efficient Long-duration Talking Video Synthesis with Linear Diffusion Transformer under Multimodal Guidance ☆60 · Updated last month
- Official implementation of SingingHead: A Large-scale 4D Dataset for Singing Head Animation (TMM 25) ☆60 · Updated 8 months ago
- [ICCV 2025] SemTalk: Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis ☆35 · Updated last week
- [CVPR 2025] KeyFace: Expressive Audio-Driven Facial Animation for Long Sequences via KeyFrame Interpolation ☆67 · Updated 7 months ago
- Official inference code of PD-FGC ☆97 · Updated 2 years ago
- [ECCV 2024 official] KMTalk: Speech-Driven 3D Facial Animation with Key Motion Embedding ☆34 · Updated last year
- [ICCV 2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation" ☆86 · Updated last year
- Official Access to ICIP 2024 "THQA: A Perceptual Quality Assessment Database for Talking Heads" ☆34 · Updated 4 months ago
- UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons (ACM MM 2023 Oral) ☆54 · Updated last year
- Data and PyTorch implementation of IEEE TMM "EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation" ☆29 · Updated last year
- ☆128 · Updated last year
- 3D-Aware Face Editing via Warping-Guided Latent Direction Learning ☆18 · Updated last year