AnimaVR / NeuroSync_Local_API
NeuroSync: local inference helper code for converting audio into facial animation.
☆76 · Updated 5 months ago
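Since the repository exposes a local inference API for turning audio into facial animation, a minimal client sketch is shown below. The host/port, the `/audio_to_blendshapes` route, and the JSON response shape are assumptions for illustration only; check the repository for the actual interface.

```python
# Minimal sketch (assumptions noted above): POST a WAV file to a locally
# running audio-to-blendshape inference endpoint and read back per-frame
# blendshape values as JSON.
import requests

def audio_to_blendshapes(wav_path: str,
                         url: str = "http://127.0.0.1:5000/audio_to_blendshapes"):
    """Send raw audio bytes and return the decoded JSON response."""
    with open(wav_path, "rb") as f:
        audio_bytes = f.read()
    # A real deployment may instead expect a multipart upload or base64 audio.
    resp = requests.post(
        url,
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()  # assumed shape: {"blendshapes": [[...floats...], ...]}

if __name__ == "__main__":
    frames = audio_to_blendshapes("sample.wav")
    print(f"received {len(frames.get('blendshapes', []))} frames")
```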
Alternatives and similar repositories for NeuroSync_Local_API
Users interested in NeuroSync_Local_API are comparing it to the repositories listed below.
- The NeuroSync Player allows for real-time streaming of facial blendshapes into Unreal Engine 5 using LiveLink - enabling facial animation… ☆130 · Updated 5 months ago
- A service to convert audio to facial blendshapes for lipsyncing and facial performances. ☆164 · Updated 4 months ago
- Use the NVIDIA Audio2Face headless server and interact with it through a requests API (see the sketch after this list). Generate animation sequences for Unreal Engine 5, … ☆139 · Updated 5 months ago
- NVIDIA ACE samples, workflows, and resources ☆288 · Updated 4 months ago
- Talk with an AI-powered, detailed 3D avatar. Uses an LLM, TTS, Unity, and lip sync to bring the character to life. ☆145 · Updated 10 months ago
- An open solution for AI-powered photorealistic digital humans. ☆132 · Updated 3 months ago
- Web interface to convert text to speech and route it to an Audio2Face streaming player. ☆34 · Updated last year
- Full version of wav2lip-onnx, including face alignment, face enhancement, and more... ☆139 · Updated 4 months ago
- VASA-1 ☆103 · Updated last year
- Project that allows real-time recording of audio and lip-syncs the image. ☆91 · Updated last year
- Alternative to Flawless AI's TrueSync. Make lips in video match provided audio using the power of Wav2Lip and GFPGAN. ☆126 · Updated last year
- Audio2Face Avatar with Riva SDK functionality ☆74 · Updated 2 years ago
- Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation ☆207 · Updated last year
- This project is a digital human that can talk to you and is animated based on your questions. It uses the Nvidia API endpoint Meta llama3… ☆61 · Updated last year
- Fast running Live Portrait with TensorRT and ONNX models ☆171 · Updated last year
- [IJCV 2025] Unlock Pose Diversity: Accurate and Efficient Implicit Keypoint-based Spatiotemporal Diffusion for Audio-driven Talking Portr… ☆279 · Updated last month
- ☆232 · Updated 2 years ago
- ☆126 · Updated 2 years ago
- The source code of "DINet: deformation inpainting network for realistic face visually dubbing on high resolution video." ☆38 · Updated last year
- AI 3D avatar voice interface in browser. VAD -> STT -> LLM -> TTS -> VRM (Prototype/Proof-of-Concept) ☆71 · Updated 2 years ago
- 🤢 LipSick: Fast, High Quality, Low Resource Lipsync Tool 🤮 ☆219 · Updated last year
- [CVPR-2025] The official code of HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation ☆307 · Updated 4 months ago
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portrai… ☆269 · Updated 2 months ago
- A quality zero-shot lipsync pipeline built with MuseTalk, LivePortrait, and CodeFormer. ☆46 · Updated last year
- This Unreal Engine sample project demonstrates how to bring Epic Games' MetaHuman digital characters to life using the Amazon Polly text-… ☆49 · Updated 2 years ago
- One-shot face animation using webcam, capable of running in real time. ☆38 · Updated last year
- ICCV 2025 ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control… ☆416 · Updated 2 months ago
- Generate ARKit expressions from audio in real time ☆154 · Updated this week
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆536 · Updated 3 months ago
- wip - running some training with overfitting - https://wandb.ai/snoozie/vasa-overfitting ☆299 · Updated last week
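The Audio2Face headless-server item above follows the same client pattern: run Audio2Face without its GUI and drive it over HTTP with `requests`. The base URL and the route/payload names below are placeholders for illustration, not verified REST endpoints; consult that repository and the NVIDIA Audio2Face documentation for the real ones.

```python
# Hedged sketch only: the port, routes, and payload fields are placeholders,
# not verified Audio2Face REST endpoints.
import requests

BASE_URL = "http://127.0.0.1:8011"  # assumed local headless-server address

def a2f_post(route: str, payload: dict) -> dict:
    """POST a JSON payload to the headless server and return its JSON reply."""
    resp = requests.post(f"{BASE_URL}{route}", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Illustrative sequence: select an audio track, then request an export that
# downstream tooling could turn into an Unreal Engine 5 animation sequence.
a2f_post("/player/set_track", {"file_name": "line_01.wav"})         # placeholder route
a2f_post("/exporter/export_blendshapes", {"output_dir": "export"})  # placeholder route
```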