Jason-cs18 / awesome-avatar
A curated list of resources dedicated to avatars.
★60 · Updated last year
Alternatives and similar repositories for awesome-avatar
Users interested in awesome-avatar are comparing it to the repositories listed below.
- One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior, CVPRW 2024 (★65 · Updated last year)
- R2-Talker: Realistic Real-Time Talking Head Synthesis with Hash Grid Landmarks Encoding and Progressive Multilayer Conditioning (★82 · Updated last year)
- A Real-Time High-Definition Teeth Restoration Network for Arbitrary Talking Face Generation Methods (★146 · Updated 2 years ago)
- PyTorch implementation of "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator" (★213 · Updated 2 years ago)
- ICASSP 2024: Adaptive Super Resolution for One-Shot Talking-Head Generation (★180 · Updated last year)
- Official PyTorch implementation of the paper "HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation" (★214 · Updated last year)
- Preprocessing scripts for talking face generation (★92 · Updated 11 months ago)
- An optimized pipeline for DINet, reducing inference latency by up to 60%. Kudos to the authors of the original repo for this amazing … (★109 · Updated 2 years ago)
- Daily tracking of awesome avatar papers, covering 2D talking heads, 3D head avatars, and body avatars (★77 · Updated 3 months ago)
- Faster Talking Face Animation on Xeon CPU (★129 · Updated 2 years ago)
- [ICCV 2023] Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video (★75 · Updated last year)
- Official source for the ACM MM 2023 paper "SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking …" (★142 · Updated 2 years ago)
- Something about Talking Head Generation (★32 · Updated 2 years ago)
- [WACV 2024] "CVTHead: One-shot Controllable Head Avatar with Vertex-feature Transformer" (★129 · Updated last year)
- Official code of the ICCV 2023 work: Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head video G… (★254 · Updated 2 years ago)
- ★123 · Updated last year
- Official project repo for the paper "Speech Driven Video Editing via an Audio-Conditioned Diffusion Model" (★229 · Updated 2 years ago)
- ★199 · Updated last year
- Official source for the ICCV 2023 paper "EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation" (★403 · Updated last year)
- DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer (★165 · Updated last year)
- ★175 · Updated 2 years ago
- Official implementation of the CVPR 2024 paper "FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appear…" (★119 · Updated last month)
- Official code for the ICCV 2023 paper "Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation" (★294 · Updated 6 months ago)
- wav2lip in a Vector Quantized (VQ) space (★27 · Updated 2 years ago)
- ★222 · Updated last year
- [CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models (★235 · Updated last year)
- ★72 · Updated 2 years ago
- Unofficial implementation of the paper "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing" (CVPR 2021 Oral) (★173 · Updated 4 years ago)
- Dataset and code for "Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset" (★106 · Updated last year)
- ★175 · Updated last year