NVIDIA / Audio2Face-3D-Samples
A service to convert audio to facial blendshapes for lipsyncing and facial performances.
☆176 · Updated 5 months ago
Alternatives and similar repositories for Audio2Face-3D-Samples
Users who are interested in Audio2Face-3D-Samples are comparing it to the repositories listed below
- Use the NVIDIA Audio2Face headless server and interact with it through a requests API. Generate animation sequences for Unreal Engine 5, … ☆139 · Updated 6 months ago
- The NeuroSync Player allows for real-time streaming of facial blendshapes into Unreal Engine 5 using LiveLink - enabling facial animation… ☆131 · Updated 5 months ago
- NeuroSync audio-to-face animation local inference helper code. ☆76 · Updated 5 months ago
- Repo collection for NVIDIA Audio2Face-3D models and tools ☆107 · Updated last month
- Generate ARKit expressions from audio in real time (see the blendshape sketch after this list) ☆160 · Updated 3 weeks ago
- Audio2Face Avatar with Riva SDK functionality ☆75 · Updated 2 years ago
- 3D avatar lip synchronization from speech (JALI-based face rigging) ☆82 · Updated 3 years ago
- ☆233 · Updated 2 years ago
- SAiD: Blendshape-based Audio-Driven Speech Animation with Diffusion ☆123 · Updated last year
- An open solution for AI-powered photorealistic digital humans. ☆132 · Updated 4 months ago
- ☆217 · Updated last year
- NVIDIA ACE samples, workflows, and resources ☆289 · Updated 4 months ago
- Audio2Face-3D Training Framework for creating custom neural networks that generate realistic facial animations from audio input ☆57 · Updated last month
- CLI tool for recording or replaying Epic Games' Live Link Face capture frames. ☆83 · Updated 2 years ago
- Drive your MetaHuman to speak within 1 second. ☆12 · Updated 7 months ago
- Talk with an AI-powered, detailed 3D avatar. Uses an LLM, TTS, Unity, and lip sync to bring the character to life. ☆151 · Updated 10 months ago
- ☆169 · Updated 3 years ago
- ARTalk generates realistic 3D head motions (lip sync, blinking, expressions, head poses) from audio in ⚡ real time ⚡. ☆100 · Updated 5 months ago
- Daily tracking of awesome avatar papers, including 2D talking heads, 3D head avatars, and body avatars. ☆76 · Updated last month
- [INTERSPEECH'24] Official repository for "MultiTalk: Enhancing 3D Talking Head Generation Across Languages with Multilingual Video Datase… ☆187 · Updated last year
- [CVPR'25] InsTaG: Learning Personalized 3D Talking Head from Few-Second Video ☆152 · Updated 4 months ago
- ☆51 · Updated 4 months ago
- ☆195 · Updated last year
- PyTorch reimplementation of audio-driven face mesh or blendshape models, including Audio2Mesh, VOCA, etc. ☆16 · Updated last year
- The Data and Code of Prompt2Sign: A Comprehensive Multilingual Sign Language Dataset. ☆198 · Updated 3 months ago
- Chinese to facial expressions ☆31 · Updated 3 years ago
- MediaPipe landmarks to a Mixamo skeleton ☆40 · Updated 2 years ago
- LLIA - Enabling Low-Latency Interactive Avatars: Real-Time Audio-Driven Portrait Video Generation with Diffusion Models ☆138 · Updated 5 months ago
- Official implementation of the paper "Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation" (ACMMM 2024) ☆72 · Updated 5 months ago
- ☆95 · Updated 3 months ago
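
Several of the repositories above exchange facial animation as per-frame ARKit-style blendshape weights (for example, the NeuroSync Player streaming into Unreal Engine 5 via LiveLink, and the real-time ARKit expression generator). As a rough illustration of that data shape, here is a minimal Python sketch of a blendshape frame container; the `BlendshapeFrame` class and the name subset are illustrative assumptions and do not reflect the API of any repository listed here. ARKit itself defines 52 such coefficients, each in the range [0, 1].

```python
from dataclasses import dataclass, field
from typing import Dict

# A small subset of the 52 standard ARKit blendshape names, for illustration only.
ARKIT_NAMES_SUBSET = [
    "eyeBlinkLeft", "eyeBlinkRight", "jawOpen",
    "mouthSmileLeft", "mouthSmileRight", "browInnerUp",
]

@dataclass
class BlendshapeFrame:
    """One frame of audio-driven facial animation: a timestamp plus named
    blendshape coefficients, each clamped to the ARKit range [0, 1]."""
    time_s: float
    coefficients: Dict[str, float] = field(default_factory=dict)

    def set(self, name: str, value: float) -> None:
        # Clamp so a downstream consumer (e.g. a game-engine face rig)
        # never receives an out-of-range weight.
        self.coefficients[name] = max(0.0, min(1.0, value))

if __name__ == "__main__":
    frame = BlendshapeFrame(time_s=0.033)  # roughly the second frame at 30 fps
    frame.set("jawOpen", 0.72)             # open jaw for a vowel
    frame.set("eyeBlinkLeft", 1.3)         # clamped down to 1.0
    print(frame.coefficients)
```

Each project then wraps a payload like this in its own transport (gRPC, LiveLink packets, recorded CSV takes, and so on), so the sketch is only a common denominator, not a drop-in client for any of them.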