Rudrabha / LipGAN
This repository contains the code for LipGAN. LipGAN was published as part of the paper titled "Towards Automatic Face-to-Face Translation".
☆613 · Updated 2 months ago
Alternatives and similar repositories for LipGAN
Users interested in LipGAN are comparing it to the libraries listed below.
- This is the repository containing the code for our CVPR 2020 paper titled "Learning Individual Speaking Styles for Accurate Lip to Speech S… ☆706 · Updated 2 years ago
- Code for "Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose" (arXiv 2020) and "Predicting Personalize… ☆766 · Updated last year
- ☆1,012 · Updated last year
- ☆513 · Updated 2 weeks ago
- ObamaNet: Photo-realistic lip-sync from audio (unofficial port) ☆239 · Updated 7 years ago
- This codebase demonstrates how to synthesize realistic 3D character animations given an arbitrary speech signal and a static character me… ☆1,225 · Updated last year
- ICASSP 2022: "Text2Video: text-driven talking-head video synthesis with phonetic dictionary". ☆439 · Updated 2 years ago
- Code for Talking Face Generation by Adversarially Disentangled Audio-Visual Representation (AAAI 2019) ☆817 · Updated 4 years ago
- ☆539 · Updated 2 years ago
- Code for Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation (CVPR 2021) ☆949 · Updated last year
- An implementation of ObamaNet: Photo-realistic lip-sync from text. ☆127 · Updated 6 years ago
- Human Video Generation Paper List ☆474 · Updated last year
- Live real-time avatars from your webcam in the browser. No dedicated hardware or software installation needed. A pure Google Colab wrappe… ☆364 · Updated 5 months ago
- Extension of the Wav2Lip repository for processing high-quality videos. ☆540 · Updated 2 years ago
- ☆208 · Updated 4 years ago
- Out of time: automated lip sync in the wild ☆812 · Updated last year
- My implementation of Few-Shot Adversarial Learning of Realistic Neural Talking Head Models (Egor Zakharov et al.). ☆831 · Updated 3 years ago
- PyTorch implementation for Head2Head and Head2Head++. It can be used to fully transfer the head pose, facial expression and eye movements… ☆313 · Updated 4 years ago
- Code for the Motion Representations for Articulated Animation paper ☆1,266 · Updated 3 months ago
- Generating Talking Face Landmarks from Speech ☆159 · Updated 2 years ago
- A repository for generating stylized talking 3D and 2D faces ☆279 · Updated 3 years ago
- Animating Arbitrary Objects via Deep Motion Transfer ☆475 · Updated 2 years ago
- This repository contains the code for my master's thesis on Emotion-Aware Facial Animation ☆147 · Updated 2 years ago
- [ECCV 2022] StyleHEAT: A framework for high-resolution editable talking face generation ☆655 · Updated 2 years ago
- Real-Time Lip Sync for Live 2D Animation ☆142 · Updated 5 years ago
- CVPR 2019 ☆258 · Updated 2 years ago
- Reference code for the "Motion-supervised Co-Part Segmentation" paper ☆661 · Updated 2 years ago
- AudioDVP: Photorealistic Audio-driven Video Portraits ☆300 · Updated last year
- The source code of the ICCV 2021 paper "PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering" ☆547 · Updated 3 years ago
- ☆962 · Updated last year