aigc3d / LAM_Audio2Expression
Generate ARKit expressions from audio in real time
☆185 · Updated 3 months ago
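To make the listing's subject concrete: ARKit face tracking defines 52 named blendshape coefficients (e.g. `jawOpen`, `eyeBlinkLeft`), each in [0, 1], and audio-to-expression models in this space predict one vector of those weights per video frame. The sketch below illustrates only that output shape; `fake_audio2expression` is a hypothetical stand-in, not the repository's actual API.

```python
# Minimal sketch of the output format an audio-to-ARKit-expression model
# produces. The model call is a hypothetical stand-in (not the real
# LAM_Audio2Expression API); only the ARKit blendshape names are standard.
import numpy as np

# A few of ARKit's 52 face blendshapes; each coefficient lies in [0.0, 1.0].
ARKIT_BLENDSHAPES = ["jawOpen", "mouthFunnel", "mouthSmileLeft",
                     "mouthSmileRight", "eyeBlinkLeft", "eyeBlinkRight"]

def fake_audio2expression(audio: np.ndarray, sr: int = 16000, fps: int = 30):
    """Stand-in for a real model: maps audio to per-frame blendshape weights."""
    n_frames = int(len(audio) / sr * fps)  # one weight vector per video frame
    rng = np.random.default_rng(0)
    # A real model would predict these from speech features; noise for shape only.
    return rng.uniform(0.0, 1.0, size=(n_frames, len(ARKIT_BLENDSHAPES)))

audio = np.zeros(16000)                 # 1 second of silence at 16 kHz
weights = fake_audio2expression(audio)
print(weights.shape)                    # (30, 6): 30 frames x 6 blendshapes
```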
Alternatives and similar repositories for LAM_Audio2Expression
Users interested in LAM_Audio2Expression are comparing it to the libraries listed below.
- LLIA - Enabling Low-Latency Interactive Avatars: Real-Time Audio-Driven Portrait Video Generation with Diffusion Models ☆149 · Updated 7 months ago
- ☆226 · Updated last year
- Official implementation of the paper "GUAVA: Generalizable Upper Body 3D Gaussian Avatar" [ICCV 2025] ☆191 · Updated 4 months ago
- A 2D customized lip-sync model for high-fidelity real-time driving. ☆123 · Updated 7 months ago
- [CVPR'25] InsTaG: Learning Personalized 3D Talking Head from Few-Second Video ☆163 · Updated 6 months ago
- [INTERSPEECH'24] Official repository for "MultiTalk: Enhancing 3D Talking Head Generation Across Languages with Multilingual Video Datase…" ☆190 · Updated last year
- ☆200 · Updated last year
- A lightweight WebGL renderer for LAM and LAM_Audio2Expression ☆49 · Updated last month
- [ECCV'24] TalkingGaussian: Structure-Persistent 3D Talking Head Synthesis via Gaussian Splatting ☆373 · Updated 10 months ago
- Daily tracking of awesome avatar papers, including 2D talking head, 3D head avatar, and body avatar. ☆77 · Updated 4 months ago
- SAiD: Blendshape-based Audio-Driven Speech Animation with Diffusion ☆132 · Updated 2 years ago
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆376 · Updated 2 weeks ago
- ARTalk generates realistic 3D head motions (lip sync, blinking, expressions, head poses) from audio in ⚡ real-time ⚡.