mediatechnologycenter / AvatarForge
Code for the project: "Audio-Driven Video-Synthesis of Personalised Moderations"
☆20 · Updated last year
Alternatives and similar repositories for AvatarForge
Users interested in AvatarForge are comparing it to the repositories listed below.
- R2-Talker: Realistic Real-Time Talking Head Synthesis with Hash Grid Landmarks Encoding and Progressive Multilayer Conditioning ☆81 · Updated last year
- Audio-Visual Generative Adversarial Network for Face Reenactment ☆158 · Updated last year
- The PyTorch implementation of the WACV23 paper "Cross-identity Video Motion Retargeting with Joint Transformation and Synthesis". ☆148 · Updated last year
- Wav2Lip-Emotion extends Wav2Lip to modify facial expressions of emotions via L1 reconstruction and pre-trained emotion objectives. We als… ☆97 · Updated 3 years ago
- Something about Talking Head Generation ☆32 · Updated last year
- Aims to accelerate image-animation-model inference using inference frameworks such as ONNX, TensorRT, and OpenVINO. ☆76 · Updated last year
- SyncTalkFace: Talking Face Generation for Precise Lip-syncing via Audio-Lip Memory ☆33 · Updated 2 years ago
- An optimized pipeline for DINet, reducing inference latency by up to 60% 🚀. Kudos to the authors of the original repo for this amazing … ☆108 · Updated last year
- 3D Avatar Lip Synchronization from speech (JALI-based face rigging) ☆82 · Updated 3 years ago
- 📖 A curated list of resources dedicated to avatars. ☆59 · Updated 8 months ago
- Implementation of Megaportrait ☆45 · Updated last year
- [ICCV 2023] Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video ☆72 · Updated last year
- [ICCV 2023] ToonTalker: Cross-Domain Face Reenactment ☆121 · Updated 9 months ago
- Wav2Lip in a Vector Quantized (VQ) space ☆28 · Updated 2 years ago
- Cloned repository from Hugging Face Spaces (CVPR 2022 Demo) ☆54 · Updated 2 years ago
- The code for the paper "Speech Driven Talking Face Generation from a Single Image and an Emotion Condition" ☆170 · Updated 2 years ago
- Official PyTorch implementation of the paper "HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation". ☆210 · Updated last year
- Preprocessing Scripts for Talking Face Generation ☆90 · Updated 6 months ago
- This is the repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation ☆128 · Updated last month
- ICASSP 2024: Adaptive Super Resolution for One-Shot Talking-Head Generation ☆180 · Updated last year
- PyTorch implementation of "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator" ☆214 · Updated last year
- One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior, CVPRW 2024 ☆61 · Updated 9 months ago
- Code for the paper "EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model" ☆195 · Updated 2 years ago
- Audio-driven video synthesis ☆41 · Updated 2 years ago