Official repository for Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation
☆489 · Apr 15, 2024 · Updated last year
Alternatives and similar repositories for diffused-heads
Users interested in diffused-heads are comparing it to the libraries listed below.
- [CVPR 2023] Implementation of "DiffTalk: Crafting Diffusion Models for Generalized Audio-Driven Portraits Animation" · ☆471 · Jul 15, 2024 · Updated last year
- Official project repo for the paper "Speech Driven Video Editing via an Audio-Conditioned Diffusion Model" · ☆229 · Jun 30, 2023 · Updated 2 years ago
- PyTorch implementation for the paper "Emotionally Enhanced Talking Face Generation" (ICCVW'23 and ACM-MMW'23) · ☆379 · Jan 12, 2025 · Updated last year
- Official code of the CVPR 2023 paper "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator" · ☆325 · Aug 8, 2023 · Updated 2 years ago
- [CVPR 2023] DPE: Disentanglement of Pose and Expression for General Video Portrait Editing · ☆453 · Feb 27, 2024 · Updated 2 years ago
- [CVPR 2023] CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior · ☆609 · Sep 20, 2023 · Updated 2 years ago
- Code for the paper "EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model" · ☆201 · Apr 28, 2023 · Updated 2 years ago
- Official code for the ICCV 2023 paper "Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation" · ☆300 · May 30, 2025 · Updated 9 months ago
- [CVPR 2023] Implementation of "Identity-Preserving Talking Face Generation With Landmark and Appearance Priors" · ☆739 · Jan 6, 2024 · Updated 2 years ago
- [ECCV 2022] StyleHEAT: A framework for high-resolution editable talking face generation