NVIDIA / Audio2Face-3D-Samples
A service that converts audio to facial blendshapes for lip syncing and facial performances.
☆69 Updated 3 months ago
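The core idea of the service is audio in, per-frame facial blendshape weights out. As a purely illustrative sketch (not the Audio2Face-3D API itself), the snippet below assumes the common convention of ARKit-style blendshape channels sampled at a fixed frame rate and shows how such output could be dumped to a CSV for inspection or import into a DCC tool; `solve_blendshapes` is a hypothetical stand-in for the actual service call.

```python
import csv

# Illustration only: Audio2Face-3D exposes its own service interface.
# This sketch just shows the typical shape of "audio -> blendshapes" output,
# assuming ARKit-style channel names (list truncated here for brevity).
ARKIT_CHANNELS = [
    "eyeBlinkLeft", "jawOpen", "mouthSmileLeft",  # ...
]

def save_blendshape_frames(frames, path, channels=ARKIT_CHANNELS):
    """Write per-frame blendshape weights (a list of dicts) to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", *channels])
        for i, frame in enumerate(frames):
            writer.writerow([i] + [f"{frame.get(c, 0.0):.4f}" for c in channels])

# frames = solve_blendshapes("speech.wav")   # hypothetical service call
# save_blendshape_frames(frames, "speech_blendshapes.csv")
```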
Alternatives and similar repositories for Audio2Face-3D-Samples:
Users interested in Audio2Face-3D-Samples are comparing it to the libraries listed below.
- Use the NVIDIA Audio2Face headless server and interact with it through a requests API. Generate animation sequences for Unreal Engine 5, … (see the client sketch after this list) ☆104 Updated 3 weeks ago
- Audio2Face Avatar with Riva SDK functionality ☆73 Updated 2 years ago
- 3D Avatar Lip Synchronization from speech (JALI-based face rigging) ☆80 Updated 3 years ago
- Talk with an AI-powered, detailed 3D avatar. Uses an LLM, TTS, Unity, and lip sync to bring the character to life. ☆78 Updated 3 months ago
- Blender add-on implementing the VOCA neural network. ☆59 Updated 2 years ago
- PyTorch reimplementation of audio-driven face mesh and blendshape models, including Audio2Mesh, VOCA, etc. ☆14 Updated 7 months ago
- Audio-to-face local inference helper code. ☆42 Updated last week
- The NeuroSync Player allows real-time streaming of facial blendshapes into Unreal Engine 5 using LiveLink, enabling facial animation… ☆73 Updated last week
- An open solution for AI-powered photorealistic digital humans. ☆121 Updated last year
- XVERSE Character UE plugin (XCharacter-UEPlugin), a 3D digital human creation plugin for Unreal Engine 5, developed by XVERSE Technology I… ☆35 Updated 7 months ago
- Web interface to convert text to speech and route it to an Audio2Face streaming player. ☆33 Updated last year
- SAiD: Blendshape-based Audio-Driven Speech Animation with Diffusion ☆101 Updated last year
- Drive your MetaHuman to speak within 1 second. ☆5 Updated 3 weeks ago
- A multi-GPU audio2face blendshape AI model trainer for your iPhone ARKit data. ☆20 Updated last week
- Daily tracking of awesome avatar papers, including 2D talking heads, 3D head avatars, and body avatars. ☆63 Updated this week
- Emotionally responsive Virtual MetaHuman CV with Real-Time User Facial Emotion Detection (Unreal Engine 5). ☆45 Updated 3 months ago
- ☆161 Updated 7 months ago
- An MVP that uses Google STT, an OpenAI LLM, and NVIDIA Audio2Face ☆63 Updated 2 years ago
- NVIDIA ACE samples, workflows, and resources ☆258 Updated last month
- ☆221 Updated last year
- ☆190 Updated last year
- CLI tool for recording or replaying Epic Games' Live Link Face capture frames. ☆80 Updated last year
- Code for the project "Audio-Driven Video-Synthesis of Personalised Moderations" ☆19 Updated last year
- Headbox tool to do facial animation on the Microsoft Rocketbox ☆47 Updated 2 years ago
- [TOG 2023] HAvatar: High-fidelity Head Avatar via Facial Model Conditioned Neural Radiance Field ☆126 Updated 8 months ago
- STDFormer: Spatio-Temporal Disentanglement Learning for 3D Human Mesh Recovery from Monocular Videos with Transformer ☆41 Updated last year
- Send MediaPipe data to Unreal Engine with LiveLink. ☆44 Updated 2 years ago
- [CVPR 2025] KeyFace: Expressive Audio-Driven Facial Animation for Long Sequences via KeyFrame Interpolation ☆35 Updated this week
- RGBAvatar: Reduced Gaussian Blendshapes for Online Modeling of Head Avatars ☆44 Updated 2 weeks ago
- Code for LAM: Large Avatar Model for One-shot Animatable Gaussian Head ☆156 Updated this week
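The first entry above mentions driving a headless Audio2Face server through a requests API. A minimal sketch of that pattern is shown below; the endpoint URL, request payload, and response layout are assumptions for illustration only, not that project's actual interface (consult its documentation for the real routes and schema).

```python
import requests

# Hypothetical endpoint and response schema, used only to illustrate the
# "POST audio, get per-frame blendshape weights back" workflow.
A2F_URL = "http://localhost:8011/audio2face/blendshapes"  # assumed URL

def audio_to_blendshapes(wav_path: str, fps: int = 30) -> list[dict]:
    """POST a WAV file and return a list of per-frame blendshape weight dicts."""
    with open(wav_path, "rb") as f:
        resp = requests.post(
            A2F_URL,
            files={"audio": f},        # assumed multipart field name
            data={"fps": str(fps)},    # assumed form parameter
            timeout=120,
        )
    resp.raise_for_status()
    # Assumed response shape: {"frames": [{"jawOpen": 0.41, ...}, ...]}
    return resp.json()["frames"]

if __name__ == "__main__":
    frames = audio_to_blendshapes("speech.wav")
    print(f"received {len(frames)} frames")
```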