Rudrabha / LipGAN
This repository contains the code for LipGAN. LipGAN was published as part of the paper titled "Towards Automatic Face-to-Face Translation".
☆613 · Updated 3 months ago
Alternatives and similar repositories for LipGAN
Users interested in LipGAN are comparing it to the repositories listed below.
- This is the repository containing code for our CVPR 2020 paper titled "Learning Individual Speaking Styles for Accurate Lip to Speech S… ☆709 · Updated 2 years ago
- ☆1,020 · Updated last year
- Code for "Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose" (arXiv 2020) and "Predicting Personalize… ☆767 · Updated last year
- ☆513 · Updated 2 months ago
- ☆537 · Updated 2 years ago
- Code for "Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation" (CVPR 2021) ☆951 · Updated last year
- This codebase demonstrates how to synthesize realistic 3D character animations given an arbitrary speech signal and a static character me… ☆1,235 · Updated last year
- ObamaNet: Photo-realistic lip-sync from audio (unofficial port) ☆238 · Updated 7 years ago
- An implementation of ObamaNet: Photo-realistic lip-sync from text. ☆126 · Updated 6 years ago
- Extension of the Wav2Lip repository for processing high-quality videos. ☆541 · Updated 2 years ago
- My implementation of "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" (Egor Zakharov et al.). ☆830 · Updated 3 years ago
- ICASSP 2022: "Text2Video: text-driven talking-head video synthesis with phonetic dictionary". ☆436 · Updated 2 years ago
- Live real-time avatars from your webcam in the browser. No dedicated hardware or software installation needed. A pure Google Colab wrappe… ☆362 · Updated 7 months ago
- Human Video Generation Paper List ☆474 · Updated last year
- Out of time: automated lip sync in the wild ☆823 · Updated last year
- Code for "Talking Face Generation by Adversarially Disentangled Audio-Visual Representation" (AAAI 2019) ☆816 · Updated 4 years ago
- Real-Time Lip Sync for Live 2D Animation ☆145 · Updated 5 years ago
- ☆208 · Updated 4 years ago
- PyTorch implementation for Head2Head and Head2Head++. It can be used to fully transfer the head pose, facial expression and eye movements… ☆313 · Updated 4 years ago
- Code for the "Motion Representations for Articulated Animation" paper ☆1,267 · Updated 4 months ago
- The source code of the ICCV 2021 paper "PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering" ☆547 · Updated 3 years ago
- A repository for generating stylized talking 3D and 3D face ☆279 · Updated 3 years ago
- Our implementation of "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" (Egor Zakharov et al.) ☆590 · Updated 2 years ago
- Generating Talking Face Landmarks from Speech ☆159 · Updated 2 years ago
- Reference code for the "Motion-supervised Co-Part Segmentation" paper ☆661 · Updated 2 years ago
- [ECCV 2022] StyleHEAT: A framework for high-resolution editable talking face generation ☆654 · Updated 2 years ago
- ☆964 · Updated 2 years ago
- CVPR 2019 ☆257 · Updated 2 years ago
- Official code for the CVPR 2022 paper "Depth-Aware Generative Adversarial Network for Talking Head Video Generation" ☆995 · Updated last year
- PyTorch implementation for the paper "Emotionally Enhanced Talking Face Generation" (ICCVW'23 and ACM-MMW'23) ☆376 · Updated 9 months ago