AnimaVR / NeuroSync_Local_API
NeuroSync audio-to-face-animation local inference helper code.
☆57 · Updated 2 weeks ago
Alternatives and similar repositories for NeuroSync_Local_API
Users interested in NeuroSync_Local_API are comparing it to the libraries listed below.
- The NeuroSync Player allows real-time streaming of facial blendshapes into Unreal Engine 5 using LiveLink - enabling facial animation… ☆97 · Updated last week
- A collection of AI model endpoints you can run locally for a real-time audio2face system. Toy demonstration, not for production. Use this… ☆18 · Updated last week
- A multi-GPU audio-to-face-animation AI model trainer for your iPhone ARKit data. ☆29 · Updated last week
- Use the NVIDIA Audio2Face headless server and interact with it through a requests API. Generate animation sequences for Unreal Engine 5, … ☆114 · Updated last month
- A service to convert audio to facial blendshapes for lip-syncing and facial performances. ☆88 · Updated last month
- An open solution for AI-powered photorealistic digital humans. ☆123 · Updated last year
- Audio2Face avatar with Riva SDK functionality. ☆73 · Updated 2 years ago
- Create face depth frames from images using landmarks. ☆36 · Updated 6 months ago
- This Unreal Engine sample project demonstrates how to bring Epic Games' MetaHuman digital characters to life using the Amazon Polly text-… ☆49 · Updated 2 years ago
- Emotionally responsive virtual MetaHuman CV with real-time user facial emotion detection (Unreal Engine 5). ☆45 · Updated 4 months ago
- Oculus LipSync plugin compiled for Unreal Engine 5. This plugin lets you synchronize the lips of 3D characters in your game with aud… ☆94 · Updated 2 years ago
- Web interface to convert text to speech and route it to an Audio2Face streaming player. ☆33 · Updated last year
- Fast-running LivePortrait with TensorRT and ONNX models. ☆161 · Updated 10 months ago
- Face Depth Frame Mancer documentation. ☆25 · Updated 4 months ago
- Virtual Actor: motion capture for your 3D avatar. ☆95 · Updated 2 years ago
- Talk with an AI-powered, detailed 3D avatar. Uses LLMs, TTS, Unity, and lip sync to bring the character to life. ☆99 · Updated 5 months ago
- A UE5 plugin for improving the MetaHuman ARKit face tracking. ☆93 · Updated last year
- 3D avatar lip synchronization from speech (JALI-based face rigging). ☆82 · Updated 3 years ago
- Pose Asset with visemes for Epic's MetaHuman face skeleton. ☆51 · Updated 2 years ago
- NVIDIA ACE samples, workflows, and resources. ☆269 · Updated last month
- SAiD: Blendshape-based Audio-Driven Speech Animation with Diffusion. ☆105 · Updated last year
- Motion capture runtime for UE4. ☆79 · Updated 4 years ago
- Send MediaPipe data to Unreal Engine with LiveLink. ☆45 · Updated 2 years ago
- XVERSE Character UE plugin (XCharacter-UEPlugin), a 3D digital human creation plugin for Unreal Engine 5, developed by XVERSE Technology I… ☆39 · Updated 9 months ago
- Alternative to Flawless AI's TrueSync. Makes lips in a video match provided audio using the power of Wav2Lip and GFPGAN. ☆123 · Updated 10 months ago
- Official implementation of Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation. ☆227 · Updated 2 months ago
- PyTorch reimplementation of audio-driven face mesh and blendshape models, including Audio2Mesh, VOCA, etc. ☆14 · Updated 8 months ago
- ☆194 · Updated last year
- Convert VMCProtocol to MOPProtocol. ☆58 · Updated 4 years ago
- Drive your MetaHuman to speak within 1 second. ☆6 · Updated 2 months ago
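Several of the repositories above (the NeuroSync local API, the Audio2Face headless-server wrapper, the audio-to-blendshapes service) share the same basic pattern: POST audio to a local endpoint, get back per-frame ARKit blendshape weights, and stream those into an engine via LiveLink. The sketch below illustrates that request/response shape only. The payload schema, field names, and frame layout are assumptions for illustration, not the actual API of any listed project; the demo runs against a mocked response rather than a live server.

```python
# Hypothetical sketch of the audio-to-blendshapes request/response pattern.
# All field names ("audio", "sample_rate", "names", "frames") are assumed,
# not taken from any repo listed above.
import base64


def build_payload(audio_bytes: bytes, sample_rate: int = 16000) -> dict:
    """Package raw PCM audio as a JSON-safe request body (assumed schema)."""
    return {
        "audio": base64.b64encode(audio_bytes).decode("ascii"),
        "sample_rate": sample_rate,
    }


def parse_frames(response: dict) -> list:
    """Turn an assumed {'names': [...], 'frames': [[...], ...]} response
    into per-frame dicts keyed by ARKit blendshape name."""
    names = response["names"]  # e.g. ["jawOpen", "mouthSmileLeft", ...]
    return [dict(zip(names, frame)) for frame in response["frames"]]


# Offline demo with a mocked server response (no live server required):
mock_response = {
    "names": ["jawOpen", "mouthSmileLeft"],
    "frames": [[0.42, 0.10], [0.38, 0.12]],
}
frames = parse_frames(mock_response)
print(frames[0]["jawOpen"])  # weight of jawOpen in the first frame
```

In a real client you would send `build_payload(...)` to whatever endpoint the chosen repo exposes (e.g. with `requests.post`) and feed the parsed frames to a LiveLink source at the server's frame rate.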