aigc3d / LAM_Audio2Expression
Generate ARKit expressions from audio in real time.
☆133 · Updated 2 months ago
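The listing itself doesn't show the repository's API, so the following is only a minimal sketch of the general pipeline such a model implements: consume audio in short chunks and map each chunk to the 52 ARKit blendshape coefficients at a fixed frame rate. The `Audio2ExpressionModel` class, its `infer` method, the chunk size, and the blendshape subset shown are hypothetical placeholders, not the actual LAM_Audio2Expression interface.

```python
# Minimal sketch of a realtime audio -> ARKit blendshape loop.
# Audio2ExpressionModel and infer() are hypothetical placeholders;
# consult the LAM_Audio2Expression repository for the real interface.
import numpy as np

# A few of the 52 ARKit blendshape names, for illustration only.
ARKIT_KEYS = [
    "jawOpen", "mouthSmileLeft", "mouthSmileRight",
    "eyeBlinkLeft", "eyeBlinkRight", "browInnerUp",
]

class Audio2ExpressionModel:
    """Stand-in for a trained audio-to-expression network."""
    def infer(self, chunk: np.ndarray) -> np.ndarray:
        # A real model would run a neural network here; we return zeros.
        return np.zeros(len(ARKIT_KEYS), dtype=np.float32)

def stream_expressions(audio: np.ndarray, sample_rate: int = 16_000,
                       frame_rate: int = 30):
    """Yield one dict of blendshape weights per video frame."""
    model = Audio2ExpressionModel()
    samples_per_frame = sample_rate // frame_rate  # e.g. 533 samples at 30 fps
    for start in range(0, len(audio) - samples_per_frame + 1, samples_per_frame):
        chunk = audio[start:start + samples_per_frame]
        weights = model.infer(chunk)
        yield dict(zip(ARKIT_KEYS, weights.tolist()))

if __name__ == "__main__":
    one_second = np.zeros(16_000, dtype=np.float32)  # placeholder silence
    for frame in stream_expressions(one_second):
        pass  # feed `frame` to an ARKit-compatible avatar renderer
```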
Alternatives and similar repositories for LAM_Audio2Expression
Users interested in LAM_Audio2Expression are comparing it to the repositories listed below.
- ☆198 · Updated 11 months ago
- LLIA - Enabling Low-Latency Interactive Avatars: Real-Time Audio-Driven Portrait Video Generation with Diffusion Models ☆110 · Updated last month
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portraits ☆242 · Updated this week
- [INTERSPEECH'24] Official repository for "MultiTalk: Enhancing 3D Talking Head Generation Across Languages with Multilingual Video Dataset" ☆171 · Updated 9 months ago
- [CVPR'25] InsTaG: Learning Personalized 3D Talking Head from Few-Second Video ☆135 · Updated 3 weeks ago
- ☆188 · Updated 7 months ago
- EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆154 · Updated this week
- Daily tracking of awesome avatar papers, including 2D talking head, 3D head avatar, body avatar. ☆75 · Updated 2 weeks ago
- A 2D customized lip-sync model for high-fidelity real-time driving. ☆70 · Updated last month
- R2-Talker: Realistic Real-Time Talking Head Synthesis with Hash Grid Landmarks Encoding and Progressive Multilayer Conditioning ☆81 · Updated last year
- [SIGGRAPH 2025] LAM: Large Avatar Model for One-shot Animatable Gaussian Head ☆684 · Updated 2 months ago
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆349 · Updated last month
- ☆330 · Updated last month
- ☆44 · Updated last month
- ☆194 · Updated last year
- A lightweight WebGL renderer for LAM and LAM_Audio2Expression ☆32 · Updated last month
- Official implementation of Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation ☆231 · Updated 4 months ago
- One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior, CVPRW 2024 ☆62 · Updated 9 months ago
- This is a project about talking faces. It is trained on 576×576 facial images and can generate 2K, 4K, 6K, and 8K digital humans. ☆55 · Updated last year
- [ECCV'24] TalkingGaussian: Structure-Persistent 3D Talking Head Synthesis via Gaussian Splatting ☆350 · Updated 4 months ago
- [ECCV 2024 Oral] EDTalk - Official PyTorch Implementation ☆429 · Updated 2 weeks ago
- LinguaLinker: Audio-Driven Portraits Animation with Implicit Facial Control Enhancement ☆74 · Updated last year
- PyTorch code for "BakedAvatar: Baking Neural Fields for Real-Time Head Avatar Synthesis" ☆301 · Updated 8 months ago
- [ICCV 2025] Official PyTorch implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait. ☆361 · Updated last month
- [CVPR 2024] Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework. ☆352 · Updated 6 months ago
- [CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models ☆225 · Updated last year
- [CVPR 2025] KeyFace: Expressive Audio-Driven Facial Animation for Long Sequences via KeyFrame Interpolation ☆53 · Updated 4 months ago
- Realtime Video and Audio Streaming with WebRTC and Gradio ☆61 · Updated last month
- Official repo for FaceShot: Bring Any Character into Life ☆71 · Updated last month
- [ICCV 2025] GaussianSpeech: Audio-Driven Gaussian Avatars ☆160 · Updated 8 months ago