AnimaVR / NeuroSync_Real-Time_API
A collection of AI model endpoints you can run locally for a real-time audio2face system. This is a toy demonstration, not production software — use it to learn!
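Since the repository exposes locally hosted model endpoints for streaming audio into facial animation, a client typically splits raw PCM audio into fixed-duration chunks and posts each chunk to the server. The sketch below shows that chunking step and a JSON payload builder; the field names (`audio`, `format`, `sample_rate`) and the payload shape are assumptions for illustration, not this API's actual schema.

```python
import base64
import json

def chunk_pcm16(audio: bytes, sample_rate: int = 16000, frame_ms: int = 250) -> list:
    """Split raw 16-bit mono PCM into fixed-duration chunks for streaming."""
    step = sample_rate * 2 * frame_ms // 1000  # 2 bytes per 16-bit sample
    return [audio[i:i + step] for i in range(0, len(audio), step)]

def build_payload(chunk: bytes) -> str:
    """Wrap one audio chunk as a JSON request body (field names are illustrative)."""
    return json.dumps({
        "audio": base64.b64encode(chunk).decode("ascii"),
        "format": "pcm16",
        "sample_rate": 16000,
    })
```

Each payload could then be POSTed to a hypothetical local endpoint (for example with `requests.post`); check the repository itself for the real route and request format.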
☆18 · Updated last month
Alternatives and similar repositories for NeuroSync_Real-Time_API
Users interested in NeuroSync_Real-Time_API are comparing it to the repositories listed below.
- Audio to face local inference helper code. ☆51 · Updated last month
- The NeuroSync Player allows for real-time streaming of facial blendshapes into Unreal Engine 5 using LiveLink - enabling facial animation… ☆88 · Updated 3 weeks ago
- A multi-GPU audio2face blendshape AI model trainer for your iPhone ARKit data. ☆27 · Updated last week
- Use the NVIDIA Audio2Face headless server and interact with it through a requests API. Generate animation sequences for Unreal Engine 5, … ☆112 · Updated 2 weeks ago
- A service to convert audio to facial blendshapes for lipsyncing and facial performances. ☆79 · Updated last month
- Create face depth frames from images using landmarks. ☆35 · Updated 5 months ago
- Convert VMCProtocol to MOPProtocol. ☆58 · Updated 4 years ago
- An open solution for AI-powered photorealistic digital humans. ☆123 · Updated last year
- ☆163 · Updated 2 years ago
- Motion Capture runtime for UE4. ☆79 · Updated 4 years ago
- Oculus Lip Sync compiled for Unreal 5. ☆43 · Updated this week
- A UE5 plugin for improving the Metahuman ARKit face tracking. ☆93 · Updated last year
- Audio2Face avatar with Riva SDK functionality. ☆73 · Updated 2 years ago
- Web interface to convert text to speech and route it to an Audio2Face streaming player. ☆33 · Updated last year
- Free UE5 MediaPipe plugin for motion capture and facial tracking. ☆12 · Updated 2 years ago
- Send MediaPipe data to Unreal Engine with LiveLink. ☆44 · Updated 2 years ago
- Emotionally responsive virtual Metahuman CV with real-time user facial emotion detection (Unreal Engine 5). ☆45 · Updated 4 months ago
- Oculus LipSync Plugin compiled for Unreal Engine 5. This plugin allows you to synchronize the lips of 3D characters in your game with aud… ☆94 · Updated 2 years ago
- Phiz is a tool that allows you to perform facial motion capture from any device and location. ☆128 · Updated 2 years ago
- NVIDIA ACE samples, workflows, and resources. ☆266 · Updated 2 weeks ago
- SAiD: Blendshape-based Audio-Driven Speech Animation with Diffusion. ☆103 · Updated last year
- ☆421 · Updated last week
- 3D Avatar Lip Synchronization from speech (JALI-based face rigging). ☆81 · Updated 3 years ago
- Face Depth Frame Mancer Documentation. ☆24 · Updated 4 months ago
- PyTorch reimplementation of audio-driven face mesh or blendshape models, including Audio2Mesh, VOCA, etc. ☆14 · Updated 8 months ago
- ☆225 · Updated last year
- ☆193 · Updated last year
- LiveLink Source for receiving JSON over sockets. ☆103 · Updated 5 years ago
- ☆148 · Updated 8 months ago
- CLI tool for recording or replaying Epic Games' Live Link Face capture frames. ☆80 · Updated last year
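Several entries above (the LiveLink source for JSON over sockets, the MediaPipe-to-LiveLink sender) share one pattern: serialize per-frame blendshape values as JSON and push each frame over a UDP socket to the engine. A minimal sketch of that pattern follows; the `subject`/`curves` key names are illustrative assumptions, not any plugin's actual schema, and the port is a placeholder.

```python
import json
import socket

def make_frame(subject: str, curves: dict) -> bytes:
    """Encode one facial frame as UTF-8 JSON (key names are illustrative only)."""
    return json.dumps({"subject": subject, "curves": curves}).encode("utf-8")

def send_frame(frame: bytes, host: str = "127.0.0.1", port: int = 54321) -> None:
    """Fire-and-forget UDP send; UDP needs no connection handshake with the listener."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(frame, (host, port))
```

UDP suits this use case because a dropped animation frame is harmless — the next frame overwrites it — whereas TCP retransmission would add latency to a real-time stream.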