NVIDIA / Audio2Face-3D-Samples
A service to convert audio to facial blendshapes for lipsyncing and facial performances.
☆187 · Updated 5 months ago
Alternatives and similar repositories for Audio2Face-3D-Samples
Users interested in Audio2Face-3D-Samples are comparing it to the repositories listed below
- Use the NVIDIA Audio2Face headless server and interact with it through a requests API. Generate animation sequences for Unreal Engine 5, … (a hedged client sketch is shown after this list) ☆140 · Updated 7 months ago
- NeuroSync audio-to-face animation local inference helper code. ☆78 · Updated 6 months ago
- The NeuroSync Player allows for real-time streaming of facial blendshapes into Unreal Engine 5 using LiveLink - enabling facial animation… ☆131 · Updated 6 months ago
- Generate ARKit expressions from audio in real time ☆166 · Updated last month
- Repo collection for NVIDIA Audio2Face-3D models and tools ☆128 · Updated 2 months ago
- 3D avatar lip synchronization from speech (JALI-based face rigging) ☆82 · Updated 3 years ago
- SAiD: Blendshape-based Audio-Driven Speech Animation with Diffusion ☆125 · Updated last year
- Audio2Face Avatar with Riva SDK functionality ☆75 · Updated 2 years ago
- ☆233 · Updated 2 years ago
- Drive your MetaHuman to speak within 1 second. ☆11 · Updated 8 months ago
- Talk with an AI-powered, detailed 3D avatar. Uses LLM, TTS, Unity, and lip sync to bring the character to life. ☆154 · Updated 11 months ago
- Daily tracking of awesome avatar papers, including 2D talking heads, 3D head avatars, and body avatars. ☆75 · Updated 2 months ago
- XVERSE Character UE plugin (XCharacter-UEPlugin), a 3D digital human creation plugin for Unreal Engine 5, developed by XVERSE Technology I… ☆42 · Updated 3 weeks ago
- ☆221 · Updated last year
- NVIDIA ACE samples, workflows, and resources ☆292 · Updated 5 months ago
- PyTorch reimplementation of audio-driven face mesh and blendshape models, including Audio2Mesh, VOCA, etc. ☆17 · Updated last year
- MediaPipe landmarks to Mixamo skeleton ☆41 · Updated 2 years ago
- Audio2Face-3D Training Framework for creating custom neural networks that generate realistic facial animations from audio input ☆62 · Updated last month
- ☆198 · Updated last year
- Blender add-on implementing the VOCA neural network. ☆61 · Updated 3 years ago
- ARTalk generates realistic 3D head motions (lip sync, blinking, expressions, head poses) from audio in ⚡ real-time ⚡. ☆105 · Updated 5 months ago
- ☆168 · Updated 3 years ago
- [CVPR'25] InsTaG: Learning Personalized 3D Talking Head from Few-Second Video ☆154 · Updated 4 months ago
- ☆51 · Updated 5 months ago
- CLI tool for recording or replaying Epic Games' Live Link Face capture frames. ☆83 · Updated 2 years ago
- [INTERSPEECH'24] Official repository for "MultiTalk: Enhancing 3D Talking Head Generation Across Languages with Multilingual Video Datase… ☆188 · Updated last year
- An open solution for AI-powered photorealistic digital humans. ☆135 · Updated 4 months ago
- DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models ☆331 · Updated 8 months ago
- VASA-1 ☆103 · Updated last year
- LLIA - Enabling Low-Latency Interactive Avatars: Real-Time Audio-Driven Portrait Video Generation with Diffusion Models ☆141 · Updated 5 months ago
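Several of the repositories above expose an audio-to-blendshapes workflow through a local headless server reached over HTTP (for example, the first item interacts with the NVIDIA Audio2Face headless server via a requests API). The sketch below is only a rough illustration of what such a client round trip can look like: the port, endpoint path, upload field name, and response schema are assumptions for illustration, not the actual API of any repository listed here.

```python
# Minimal sketch, assuming a hypothetical Audio2Face-style headless server that
# accepts a WAV upload and returns per-frame blendshape weights as JSON.
# The URL, endpoint, and JSON layout are assumptions -- consult the repo's docs
# for the real interface.
import requests

SERVER_URL = "http://localhost:8011/audio_to_blendshapes"  # assumed endpoint


def audio_to_blendshapes(wav_path: str) -> list[dict]:
    """Upload an audio clip and return per-frame blendshape weight dicts (assumed schema)."""
    with open(wav_path, "rb") as f:
        resp = requests.post(SERVER_URL, files={"audio": f}, timeout=60)
    resp.raise_for_status()
    # Assumed response shape:
    # {"frames": [{"jawOpen": 0.42, "mouthSmileLeft": 0.10, ...}, ...]}
    return resp.json()["frames"]


if __name__ == "__main__":
    frames = audio_to_blendshapes("line_01.wav")
    print(f"Received {len(frames)} animation frames")
```

The returned per-frame weights would then typically be retargeted onto a rig or streamed into Unreal Engine 5 (e.g. via LiveLink, as the NeuroSync Player entry describes); that streaming step is outside the scope of this sketch.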