semchan / HyperLips
Official PyTorch implementation of our paper "HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation".
☆208 · Updated last year
Alternatives and similar repositories for HyperLips
Users interested in HyperLips are comparing it to the libraries listed below.
- PyTorch implementation of "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator" · ☆212 · Updated last year
- An optimized pipeline for DINet, reducing inference latency by up to 60%. Kudos to the authors of the original repo for this amazing … · ☆107 · Updated last year
- Official code of CVPR '23 paper "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator" · ☆318 · Updated last year
- The official code of our ICCV 2023 work: Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head video G… · ☆252 · Updated last year
- A Real-Time High-Definition Teeth Restoration Network for Arbitrary Talking Face Generation Methods · ☆141 · Updated last year
- R2-Talker: Realistic Real-Time Talking Head Synthesis with Hash Grid Landmarks Encoding and Progressive Multilayer Conditioning · ☆80 · Updated last year
- ☆417 · Updated last year
- Audio-Visual Generative Adversarial Network for Face Reenactment · ☆158 · Updated last year
- This is the official source for our ICCV 2023 paper "EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation" · ☆375 · Updated last year
- Official code for ICCV 2023 paper: "Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation" · ☆292 · Updated 3 weeks ago
- The dataset and code for "Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset" · ☆386 · Updated last year
- Code for paper 'EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model' · ☆194 · Updated 2 years ago
- ☆123 · Updated last year
- PyTorch implementation for the paper "Emotionally Enhanced Talking Face Generation" (ICCVW'23 and ACM-MMW'23) · ☆368 · Updated 5 months ago
- Official project repo for the paper "Speech Driven Video Editing via an Audio-Conditioned Diffusion Model" · ☆229 · Updated last year
- ☆515 · Updated last year
- ☆158 · Updated last year
- [CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models · ☆222 · Updated last year
- ICASSP 2024: Adaptive Super Resolution for One-Shot Talking-Head Generation · ☆180 · Updated last year
- [ECCV 2022] The implementation for "Learning Dynamic Facial Radiance Fields for Few-Shot Talking Head Synthesis" · ☆343 · Updated 2 years ago
- [CVPR 2023] CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior · ☆580 · Updated last year
- One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior, CVPRW 2024 · ☆61 · Updated 7 months ago
- [CVPR 2023] OTAvatar: One-shot Talking Face Avatar with Controllable Tri-plane Rendering · ☆323 · Updated last year
- ☆358 · Updated 10 months ago
- [CVPR 2023] The implementation for "DiffTalk: Crafting Diffusion Models for Generalized Audio-Driven Portraits Animation" · ☆467 · Updated 11 months ago
- Using Claude Opus to reverse-engineer code from MegaPortraits: One-shot Megapixel Neural Head Avatars · ☆93 · Updated 7 months ago
- Faster Talking Face Animation on Xeon CPU · ☆128 · Updated last year
- Code for One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning (AAAI 2022) · ☆358 · Updated 2 years ago
- A curated list of resources on audio-driven talking face generation · ☆141 · Updated 2 years ago
- A curated list of resources dedicated to avatars · ☆59 · Updated 7 months ago