ajay-sainy / Wav2Lip-GFPGAN
High quality Lip sync
☆1,112 · Updated 9 months ago
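As the repository name suggests, the pipeline is two-stage: Wav2Lip generates a lip-synced video and GFPGAN restores the face region for sharper frames. Below is a minimal sketch of that idea, assuming the standard Wav2Lip CLI flags and GFPGAN's `GFPGANer` Python API; the file paths, checkpoint locations, and frame-by-frame loop are illustrative assumptions, not the repository's actual scripts.

```python
# Minimal sketch of the Wav2Lip -> GFPGAN idea. Paths and checkpoints are
# assumptions; this is not the repository's own pipeline script.
import subprocess

import cv2
from gfpgan import GFPGANer

# Stage 1: lip-sync with the standard Wav2Lip inference CLI.
subprocess.run([
    "python", "inference.py",
    "--checkpoint_path", "checkpoints/wav2lip_gan.pth",   # assumed checkpoint path
    "--face", "input_video.mp4",
    "--audio", "speech.wav",
    "--outfile", "results/lipsynced.mp4",
], check=True)

# Stage 2: restore every frame of the lip-synced video with GFPGAN.
restorer = GFPGANer(
    model_path="experiments/pretrained_models/GFPGANv1.3.pth",  # assumed model path
    upscale=2,
)

reader = cv2.VideoCapture("results/lipsynced.mp4")
fps = reader.get(cv2.CAP_PROP_FPS)
writer = None

while True:
    ok, frame = reader.read()
    if not ok:
        break
    # enhance() returns (cropped_faces, restored_faces, restored_img);
    # paste_back=True gives the full restored frame.
    _, _, restored = restorer.enhance(frame, paste_back=True)
    if writer is None:
        h, w = restored.shape[:2]
        writer = cv2.VideoWriter("results/restored.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    writer.write(restored)

reader.release()
if writer is not None:
    writer.release()
```

A complete pipeline would also mux the original audio back into the restored video (for example with ffmpeg), which this sketch omits.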
Alternatives and similar repositories for Wav2Lip-GFPGAN:
Users interested in Wav2Lip-GFPGAN are comparing it to the repositories listed below.
- The source code of "DINet: deformation inpainting network for realistic face visually dubbing on high resolution video." ☆1,059 · Updated last year
- Wav2Lip UHQ extension for Automatic1111 ☆1,365 · Updated 10 months ago
- This project implements Wav2Lip video lip-sync based on SadTalkers. Lip shapes are generated by driving a video file with speech, and a configurable face-region enhancement mode sharpens the synthesized lip (face) region to improve clarity. DAIN deep-learning frame interpolation adds intermediate frames to the generated video, smoothing lip motion between frames so that the synthesized lip… ☆1,965 · Updated last year
- High-Fidelity Lip-Syncing with Wav2Lip and Real-ESRGAN ☆450 · Updated last year
- ☆625 · Updated last year
- Colab for making Wav2Lip high quality and easy to use ☆803 · Updated 11 months ago
- [CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis" ☆1,461 · Updated 8 months ago
- Real-time Neural Radiance Talking Portrait Synthesis via Audio-spatial Decomposition ☆917 · Updated last year
- CVPR2023 talking face implementation for Identity-Preserving Talking Face Generation With Landmark and Appearance Priors ☆726 · Updated last year
- [ICCV'23] Efficient Region-Aware Neural Radiance Fields for High-Fidelity Talking Portrait Synthesis ☆1,172 · Updated last month
- Extension of Wav2Lip repository for processing high-quality videos. ☆541 · Updated 2 years ago
- GeneFace++: Generalized and Stable Real-Time 3D Talking Face Generation; Official Code ☆1,693 · Updated 6 months ago
- GeneFace: Generalized and High-Fidelity 3D Talking Face Synthesis; ICLR 2023; Official code ☆2,600 · Updated 6 months ago
- VividTalk: One-Shot Audio-Driven Talking Head Generation Based on 3D Hybrid Prior ☆788 · Updated last year
- Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis; ICLR 2024 Spotlight; Official code ☆1,028 · Updated 6 months ago
- ☆199 · Updated last year
- Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation (SIGGRAPH Asia 2021) ☆1,254 · Updated last year
- Official implementations for paper: DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models ☆1,710 · Updated last year
- ☆354 · Updated 8 months ago
- Code for the IJCAI 2021 paper "Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion" ☆342 · Updated last year
- ☆599 · Updated last year
- [CVPR2023] The implementation for "DiffTalk: Crafting Diffusion Models for Generalized Audio-Driven Portraits Animation" ☆466 · Updated 9 months ago
- A simple and open-source analogue of the HeyGen system ☆949 · Updated 9 months ago
- ☆220 · Updated last year
- ☆834 · Updated last year
- ☆514 · Updated last year
- PyTorch Implementation for Paper "Emotionally Enhanced Talking Face Generation" (ICCVW'23 and ACM-MMW'23) ☆364 · Updated 3 months ago
- This repository contains a PyTorch implementation of "AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis". ☆1,052 · Updated last year
- Real time streaming talking head ☆471 · Updated 11 months ago
- ☆418 · Updated last year