sowwnn / KFusion-Dual-Domain-for-Speech-to-Landmarks
KAN-based Fusion of Dual Domain for Audio-Driven Landmarks Generation. The model helps you generate a sequence of facial landmarks from audio input.
☆30 · Updated 2 months ago
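The repository's API is not documented here, so the snippet below is only a hedged sketch of the expected input/output shapes for audio-to-landmark generation: a mono waveform goes in, and a per-frame sequence of 2D facial landmarks (commonly 68 points) comes out. The function name, frame rate, and landmark count are assumptions, and a zero-filled placeholder stands in for the actual model.

```python
# Hypothetical usage sketch -- NOT the real KFusion API.
# Illustrates typical I/O shapes for speech-to-landmarks models:
# waveform (samples,) -> landmarks (frames, n_landmarks, 2).
import numpy as np

def dummy_kfusion_predict(audio: np.ndarray, sample_rate: int = 16000,
                          fps: int = 25, n_landmarks: int = 68) -> np.ndarray:
    """Map a mono waveform to a landmark sequence of shape (frames, n_landmarks, 2)."""
    n_frames = int(len(audio) / sample_rate * fps)
    # Placeholder output; the actual model fuses dual-domain audio features
    # through KAN layers to predict real landmark trajectories.
    return np.zeros((n_frames, n_landmarks, 2), dtype=np.float32)

audio = np.random.randn(16000 * 2).astype(np.float32)  # 2 seconds at 16 kHz
landmarks = dummy_kfusion_predict(audio)
print(landmarks.shape)  # (50, 68, 2): 50 frames at 25 fps, 68 (x, y) points each
```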
Alternatives and similar repositories for KFusion-Dual-Domain-for-Speech-to-Landmarks
Users interested in KFusion-Dual-Domain-for-Speech-to-Landmarks are comparing it to the libraries listed below.
- ☆51 · Updated 5 months ago
- Preprocessing Scripts for Talking Face Generation ☆93 · Updated 11 months ago
- ☆29 · Updated 5 months ago
- One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior, CVPRW 2024 ☆65 · Updated last year
- [CVPR2025] KeyFace: Expressive Audio-Driven Facial Animation for Long Sequences via KeyFrame Interpolation ☆68 · Updated 8 months ago
- [AAAI 2024] Style2Talker - Official PyTorch Implementation ☆48 · Updated 4 months ago
- ICASSP 2024: Adaptive Super Resolution For One-Shot Talking-Head Generation ☆180 · Updated last year
- NeurIPS 2022 ☆39 · Updated 3 years ago
- A novel approach for personalized speech-driven 3D facial animation