NVIDIA / Audio2Face-3D-Samples
A service to convert audio to facial blendshapes for lipsyncing and facial performances.
☆196 · Updated 6 months ago
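Most of the repositories below revolve around the same pipeline: turning an audio signal into per-frame facial blendshape weights. As a purely illustrative sketch (this is not the Audio2Face-3D API; the function name, the blendshape subset, and the energy-to-jaw mapping are all invented for illustration), the following shows the typical shape of such output: one dictionary of ARKit-style blendshape weights per animation frame.

```python
import math

# A small assumed subset of the 52 standard ARKit blendshape names.
BLENDSHAPES = ["jawOpen", "mouthFunnel", "mouthPucker", "eyeBlinkLeft"]

def fake_audio_to_blendshapes(samples, sample_rate=16000, fps=30):
    """Toy stand-in (NOT a real A2F model): one weight dict per video frame,
    derived here from nothing more than the audio's RMS loudness."""
    hop = sample_rate // fps  # audio samples consumed per animation frame
    frames = []
    for start in range(0, len(samples) - hop + 1, hop):
        window = samples[start:start + hop]
        energy = math.sqrt(sum(s * s for s in window) / hop)  # RMS loudness
        # Louder audio -> wider jaw opening; all weights stay in [0, 1].
        weights = {name: 0.0 for name in BLENDSHAPES}
        weights["jawOpen"] = min(1.0, energy)
        frames.append(weights)
    return frames

# Usage: one second of a 220 Hz tone at 16 kHz yields 30 frames of weights.
audio = [0.5 * math.sin(2 * math.pi * 220 * t / 16000) for t in range(16000)]
frames = fake_audio_to_blendshapes(audio)
```

Real services (gRPC streaming in NVIDIA's case, LiveLink in the NeuroSync/Unreal projects below) differ in transport and model, but the frame-of-weights output format is the common ground that makes these projects comparable.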
Alternatives and similar repositories for Audio2Face-3D-Samples
Users interested in Audio2Face-3D-Samples are comparing it to the repositories listed below.
- Use the NVIDIA Audio2Face headless server and interact with it through a requests API. Generate animation sequences for Unreal Engine 5, … ☆144 · Updated 7 months ago
- NeuroSync audio-to-face animation local inference helper code. ☆78 · Updated 7 months ago
- The NeuroSync Player allows real-time streaming of facial blendshapes into Unreal Engine 5 using LiveLink, enabling facial animation… ☆133 · Updated 7 months ago
- Repo collection for NVIDIA Audio2Face-3D models and tools. ☆145 · Updated 3 months ago
- Generate ARKit expressions from audio in real time. ☆173 · Updated 2 months ago
- Audio2Face Avatar with Riva SDK functionality. ☆75 · Updated 2 years ago
- 3D avatar lip synchronization from speech (JALI-based face rigging). ☆82 · Updated 3 years ago
- SAiD: Blendshape-based Audio-Driven Speech Animation with Diffusion. ☆126 · Updated last year
- ☆234 · Updated 2 years ago
- XVERSE Character UE plugin (XCharacter-UEPlugin), a 3D digital human creation plugin for Unreal Engine 5, developed by XVERSE Technology I… ☆44 · Updated last month
- Drive your MetaHuman to speak within 1 second. ☆11 · Updated 9 months ago
- An open solution for AI-powered photorealistic digital humans. ☆135 · Updated 5 months ago
- PyTorch reimplementation of audio-driven face mesh or blendshape models, including Audio2Mesh, VOCA, etc. ☆17 · Updated last year
- NVIDIA ACE samples, workflows, and resources. ☆294 · Updated 6 months ago
- ☆168 · Updated 3 years ago
- Daily tracking of awesome avatar papers, including 2D talking heads, 3D head avatars, and body avatars. ☆77 · Updated 3 months ago
- ☆223 · Updated last year
- Audio2Face-3D Training Framework for creating custom neural networks that generate realistic facial animations from audio input. ☆65 · Updated 2 months ago
- ☆199 · Updated last year
- CLI tool for recording or replaying Epic Games' Live Link Face capture frames. ☆83 · Updated 2 years ago
- ☆16 · Updated 2 years ago
- Talk with an AI-powered, detailed 3D avatar. Uses LLM, TTS, Unity, and lip sync to bring the character to life. ☆157 · Updated last year
- High-performance C++/CUDA SDK for running Audio2Emotion and Audio2Face inference with integrated post-processing. ☆122 · Updated 4 months ago
- Blender add-on implementing the VOCA neural network. ☆61 · Updated 3 years ago
- ARTalk generates realistic 3D head motions (lip sync, blinking, expressions, head poses) from audio in ⚡ real-time ⚡. ☆107 · Updated 6 months ago
- MediaPipe landmarks to Mixamo skeleton. ☆41 · Updated 2 years ago
- [INTERSPEECH'24] Official repository for "MultiTalk: Enhancing 3D Talking Head Generation Across Languages with Multilingual Video Datase… ☆190 · Updated last year
- [ECCV 2024] RodinHD: High-Fidelity 3D Avatar Generation with Diffusion Models. ☆180 · Updated 11 months ago
- LLIA - Enabling Low-Latency Interactive Avatars: Real-Time Audio-Driven Portrait Video Generation with Diffusion Models. ☆145 · Updated 6 months ago
- Send MediaPipe data to Unreal Engine with LiveLink. ☆45 · Updated 2 years ago