AnimaVR / NeuroSync_Player
The NeuroSync Player allows real-time streaming of facial blendshapes into Unreal Engine 5 via LiveLink, enabling facial animation driven by audio input.
☆121 · Updated 2 months ago
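To illustrate the idea behind the player, here is a minimal sketch of streaming per-frame blendshape weights to a listening receiver. The JSON payload, port number, and `"FaceSource"` subject name are illustrative assumptions: the real NeuroSync Player speaks Unreal's Live Link protocol, whose binary packet format is not reproduced here.

```python
# Minimal sketch: stream facial blendshape weights over UDP, one frame per packet.
# NOTE: payload format, port, and subject name are hypothetical stand-ins for
# Unreal's actual Live Link wire protocol.
import json
import socket
import time

# A small subset of the 52 ARKit blendshape names, for illustration.
ARKIT_BLENDSHAPES = ["jawOpen", "mouthSmileLeft", "mouthSmileRight"]

def make_frame(weights):
    """Pack one animation frame (weights aligned with ARKIT_BLENDSHAPES) as JSON bytes.

    Weights are clamped to the valid [0, 1] blendshape range.
    """
    return json.dumps({
        "subject": "FaceSource",  # hypothetical Live Link subject name
        "timestamp": time.time(),
        "weights": {
            name: max(0.0, min(1.0, w))
            for name, w in zip(ARKIT_BLENDSHAPES, weights)
        },
    }).encode("utf-8")

def stream_frames(frames, host="127.0.0.1", port=11111, fps=60):
    """Send frames to the receiver at a fixed rate (e.g. a Live Link source in UE5)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for weights in frames:
            sock.sendto(make_frame(weights), (host, port))
            time.sleep(1.0 / fps)
    finally:
        sock.close()
```

In practice an audio-to-face model emits one weight vector per audio window, and the sender paces packets at the animation frame rate so the engine-side subject stays in sync.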
Alternatives and similar repositories for NeuroSync_Player
Users interested in NeuroSync_Player are comparing it to the repositories listed below.
- NeuroSync audio-to-face animation local inference helper code. ☆70 · Updated 2 months ago
- A multi-GPU audio-to-face animation AI model trainer for your iPhone ARKit data. ☆37 · Updated 2 months ago
- A collection of AI model endpoints you can run locally for a real-time audio2face system. Toy demonstration, not for production. Use this… ☆23 · Updated 2 months ago
- Use the NVIDIA Audio2Face headless server and interact with it through a requests API. Generate animation sequences for Unreal Engine 5, … ☆125 · Updated 3 months ago
- A service to convert audio to facial blendshapes for lip syncing and facial performances. ☆108 · Updated last month
- ☆445 · Updated 3 months ago
- NVIDIA ACE samples, workflows, and resources. ☆272 · Updated last month
- Integration for the OpenAI API in Unreal Engine. ☆36 · Updated 8 months ago
- Create face depth frames from images using landmarks. ☆36 · Updated 8 months ago
- ☆227 · Updated 2 years ago
- Web interface to convert text to speech and route it to an Audio2Face streaming player. ☆33 · Updated last year
- An open solution for AI-powered photorealistic digital humans. ☆130 · Updated 3 weeks ago
- The source code of "DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video." ☆38 · Updated 11 months ago
- Audio2Face avatar with Riva SDK functionality. ☆74 · Updated 2 years ago
- Official implementation for the SIGGRAPH Asia 2024 paper "SPARK: Self-supervised Personalized Real-time Monocular Face Capture." ☆379 · Updated last month
- Oculus Lip Sync compiled for Unreal 5. ☆50 · Updated last month
- [ICCV 2025] FaceLift: Learning Generalizable Single Image 3D Face Reconstruction from Synthetic Heads. ☆393 · Updated this week
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis. ☆407 · Updated 3 weeks ago
- Full version of wav2lip-onnx, including face alignment, face enhancement, and more… ☆132 · Updated last month
- A wav2lip web UI using Gradio. ☆72 · Updated last year
- Official implementation of "Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation." ☆231 · Updated 4 months ago
- Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation. ☆205 · Updated last year
- An Android alternative implementation for Unreal face Live Link. ☆345 · Updated 2 years ago
- ☆168 · Updated 2 years ago
- 3D avatar lip synchronization from speech (JALI-based face rigging). ☆82 · Updated 3 years ago
- Talk with an AI-powered, detailed 3D avatar. Uses an LLM, TTS, Unity, and lip sync to bring the character to life. ☆125 · Updated 7 months ago
- SAiD: Blendshape-based Audio-Driven Speech Animation with Diffusion. ☆116 · Updated last year
- Send MediaPipe data to Unreal Engine with Live Link. ☆45 · Updated 2 years ago
- Alternative to Flawless AI's TrueSync. Makes lips in video match provided audio using the power of Wav2Lip and GFPGAN. ☆124 · Updated last year
- Oculus LipSync plugin compiled for Unreal Engine 5. This plugin allows you to synchronize the lips of 3D characters in your game with aud… ☆96 · Updated 2 years ago