Rudrabha / LipGAN
This repository contains the code for LipGAN, published as part of the paper "Towards Automatic Face-to-Face Translation".
☆612 · Updated last month
Alternatives and similar repositories for LipGAN
Users interested in LipGAN are comparing it to the repositories listed below.
- This is the repository containing code for our CVPR 2020 paper titled "Learning Individual Speaking Styles for Accurate Lip to Speech S… ☆709 · Updated 2 years ago
- Code for "Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose" (arXiv 2020) and "Predicting Personalize… ☆766 · Updated last year
- ☆1,014 · Updated last year
- ☆513 · Updated 3 years ago
- ObamaNet: Photo-realistic lip-sync from audio (unofficial port) ☆239 · Updated 7 years ago
- Human Video Generation Paper List ☆473 · Updated last year
- ☆538 · Updated 2 years ago
- Code for "Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation" (CVPR 2021) ☆946 · Updated last year
- This codebase demonstrates how to synthesize realistic 3D character animations given an arbitrary speech signal and a static character me… ☆1,218 · Updated 11 months ago
- An implementation of ObamaNet: Photo-realistic lip-sync from text. ☆127 · Updated 6 years ago
- Code for "Talking Face Generation by Adversarially Disentangled Audio-Visual Representation" (AAAI 2019) ☆818 · Updated 4 years ago
- Live real-time avatars from your webcam in the browser. No dedicated hardware or software installation needed. A pure Google Colab wrappe… ☆363 · Updated 4 months ago
- Extension of the Wav2Lip repository for processing high-quality videos. ☆540 · Updated 2 years ago
- ICASSP 2022: "Text2Video: text-driven talking-head video synthesis with phonetic dictionary". ☆437 · Updated 2 years ago
- Out of time: automated lip sync in the wild ☆806 · Updated last year
- My implementation of "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" (Egor Zakharov et al.). ☆833 · Updated 3 years ago
- ☆207 · Updated 4 years ago
- PyTorch implementation for Head2Head and Head2Head++. It can be used to fully transfer the head pose, facial expression and eye movements… ☆314 · Updated 3 years ago
- This repository contains the code for my master's thesis on Emotion-Aware Facial Animation ☆147 · Updated 2 years ago
- ☆963 · Updated last year
- Reference code for the "Motion-supervised Co-Part Segmentation" paper ☆660 · Updated 2 years ago
- Our implementation of "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" (Egor Zakharov et al.) ☆590 · Updated 2 years ago
- A repository for generating stylized talking 3D faces ☆279 · Updated 3 years ago
- Generating Talking Face Landmarks from Speech ☆159 · Updated 2 years ago
- Code for the "Motion Representations for Articulated Animation" paper ☆1,266 · Updated 2 months ago
- Animating Arbitrary Objects via Deep Motion Transfer ☆475 · Updated 2 years ago
- CVPR 2019 ☆258 · Updated 2 years ago
- Source code for the ICCV 2021 paper "PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering" ☆542 · Updated 3 years ago
- Real-Time Lip Sync for Live 2D Animation ☆142 · Updated 5 years ago
- Official code for the CVPR 2022 paper "Depth-Aware Generative Adversarial Network for Talking Head Video Generation" ☆991 · Updated last year