Jason-cs18 / awesome-avatar
A curated list of resources dedicated to avatars.
☆58 · Updated 4 months ago
Alternatives and similar repositories for awesome-avatar:
Users that are interested in awesome-avatar are comparing it to the libraries listed below
- Preprocessing Scripts for Talking Face Generation ☆86 · Updated 2 months ago
- One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior, CVPRW 2024 ☆60 · Updated 4 months ago
- Daily tracking of awesome avatar papers, including 2D talking head, 3D head avatar, body avatar. ☆62 · Updated this week
- R2-Talker: Realistic Real-Time Talking Head Synthesis with Hash Grid Landmarks Encoding and Progressive Multilayer Conditioning ☆80 · Updated last year
- ☆155 · Updated 6 months ago
- A Real-Time High-Definition Teeth Restoration Network for Arbitrary Talking Face Generation Methods ☆138 · Updated last year
- [ICASSP 2024] DiffDub: Person-generic visual dubbing using inpainting renderer with diffusion auto-encoder ☆56 · Updated 8 months ago
- [INTERSPEECH'24] Official repository for "MultiTalk: Enhancing 3D Talking Head Generation Across Languages with Multilingual Video Datase… ☆89 · Updated 4 months ago
- PyTorch official implementation for our paper "HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation". ☆202 · Updated last year
- [WACV 2024] "CVTHead: One-shot Controllable Head Avatar with Vertex-feature Transformer" ☆116 · Updated last year
- ☆124 · Updated 10 months ago
- ☆152 · Updated last year
- ICASSP 2024: Adaptive Super Resolution for One-Shot Talking-Head Generation ☆178 · Updated 11 months ago
- ☆74 · Updated last year
- Official implementation of the CVPR 2024 paper "FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appear… ☆107 · Updated 5 months ago
- A novel approach for personalized speech-driven 3D facial animation ☆46 · Updated 10 months ago
- This is the official source for our ACM MM 2023 paper "SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking … ☆138 · Updated last year
- [ICCV 2023] Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video ☆67 · Updated 11 months ago
- ☆40 · Updated 2 weeks ago
- An optimized pipeline for DINet, reducing inference latency by up to 60%. Kudos to the authors of the original repo for this amazing … ☆106 · Updated last year
- This is the repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation ☆123 · Updated last year
- Something about Talking Head Generation ☆32 · Updated last year
- PyTorch implementation of "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator" ☆211 · Updated last year
- KAN-based Fusion of Dual Domain for Audio-Driven Landmarks Generation; the model can help you generate a sequence of facial landmarks f… ☆27 · Updated last month
- This is the official inference code of PD-FGC ☆84 · Updated last year
- The PyTorch implementation of our WACV23 paper "Cross-identity Video Motion Retargeting with Joint Transformation and Synthesis". ☆148 · Updated last year
- ☆145 · Updated last year
- [ICCV 2023] ToonTalker: Cross-Domain Face Reenactment ☆117 · Updated 4 months ago
- Code for paper "EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model" ☆194 · Updated last year
- This is the official source for our ICCV 2023 paper "EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation" ☆368 · Updated last year