zslrmhb / Omniverse-Virtual-Assisstant
Audio2Face Avatar with Riva SDK functionality
☆73 · Updated 2 years ago
Alternatives and similar repositories for Omniverse-Virtual-Assisstant
Users interested in Omniverse-Virtual-Assisstant are comparing it to the libraries listed below.
- Use the NVIDIA Audio2Face headless server and interact with it through a requests API (a request sketch follows this list). Generate animation sequences for Unreal Engine 5, …☆114 · Updated last month
- ☆225 · Updated last year
- 3D Avatar Lip Synchronization from speech (JALI-based face rigging)☆82 · Updated 3 years ago
- ☆95 · Updated 3 years ago
- An MVP that uses Google STT, OpenAI LLM, and Nvidia Audio2Face☆64 · Updated 2 years ago
- An open solution for AI-powered photorealistic digital humans.☆123 · Updated last year
- Blender add-on implementing the VOCA neural network.☆59 · Updated 2 years ago
- A service to convert audio to facial blendshapes for lipsyncing and facial performances.☆88 · Updated last month
- PyTorch reimplementation of audio-driven face mesh and blendshape models, including Audio2Mesh, VOCA, etc.☆14 · Updated 9 months ago
- ☆194 · Updated last year
- A curated list of resources on audio-driven talking face generation☆141 · Updated 2 years ago
- [CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models☆221 · Updated last year
- Web interface to convert text to speech and route it to an Audio2Face streaming player (a gRPC sketch also follows this list).☆33 · Updated last year
- An optimized pipeline for DINet, reducing inference latency by up to 60% 🚀. Kudos to the authors of the original repo for this amazing …☆107 · Updated last year
- SAiD: Blendshape-based Audio-Driven Speech Animation with Diffusion☆105 · Updated last year
- Audio-driven facial animation generator with a BiLSTM for transcribing the speech and a web interface displaying the avatar and the anim…☆35 · Updated 2 years ago
- ☆123 · Updated last year
- ☆162 · Updated last year
- 📖 A curated list of resources dedicated to avatars.☆58 · Updated 6 months ago
- This project is a digital human that can talk to you and is animated based on your questions. It uses the Nvidia API endpoint Meta llama3…☆55 · Updated 10 months ago
- This is the repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation☆126 · Updated last year
- Official implementation of Audio2Motion: Generating Diverse Gestures from Speech with Conditional Variational Autoencoders.☆135 · Updated last year
- Web-first SDK that provides the 52 ARKit-compatible blend shapes in real time from a camera feed, video, or image at 60 FPS using ML models.☆84 · Updated 2 years ago
- [CVPR 2023] CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior☆577 · Updated last year
- R2-Talker: Realistic Real-Time Talking Head Synthesis with Hash Grid Landmarks Encoding and Progressive Multilayer Conditioning☆80 · Updated last year
- Faster Talking Face Animation on Xeon CPU☆128 · Updated last year
- Speech-Driven Expression Blendshape Based on Single-Layer Self-attention Network (AIWIN 2022)☆76 · Updated 2 years ago
- Official PyTorch implementation of our paper "HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation".☆208 · Updated last year
- This is the official source for our ICCV 2023 paper "EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation"☆373 · Updated last year
- A multi-GPU audio-to-face-animation AI model trainer for your iPhone ARKit data.☆29 · Updated last week
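
The first item in the list drives the Audio2Face headless server over plain HTTP with the requests library. Here is a minimal sketch of that loop, assuming a local server on the commonly used port 8011; the endpoint paths, payload fields, and prim/file paths below are taken from typical Audio2Face headless setups and should be verified against your Audio2Face version's API docs:

```python
# Minimal sketch of driving an Audio2Face headless server over its REST API.
# Assumptions: local server on port 8011, endpoint paths as named below,
# and illustrative file/prim paths -- verify all of these for your setup.
import requests

BASE_URL = "http://localhost:8011"  # assumed default headless port


def a2f_post(path: str, payload: dict) -> dict:
    """POST a JSON payload to the Audio2Face server and return its JSON reply."""
    response = requests.post(f"{BASE_URL}{path}", json=payload, timeout=30)
    response.raise_for_status()
    return response.json()


# 1. Check that the headless instance is up.
print(requests.get(f"{BASE_URL}/status", timeout=5).text)

# 2. Load a USD scene containing the Audio2Face pipeline (path is illustrative).
a2f_post("/A2F/USD/Load", {"file_name": "/path/to/scene.usd"})

# 3. Point the player at a WAV track and play it to generate animation.
player = "/World/audio2face/Player"  # assumed prim path of the audio player
a2f_post("/A2F/Player/SetRootPath", {"a2f_player": player, "dir_path": "/path/to/audio"})
a2f_post("/A2F/Player/SetTrack", {"a2f_player": player, "file_name": "speech.wav"})
a2f_post("/A2F/Player/Play", {"a2f_player": player})
```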
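The text-to-speech routing item works the other way around: it pushes synthesized audio into an Audio2Face streaming player, which accepts gRPC pushes. A sketch of that client side, assuming Python stubs generated from NVIDIA's sample audio2face.proto (the service, message, and field names below follow that sample and may differ in your install) and a default streaming-player prim path:

```python
# Sketch of pushing a WAV file into an Audio2Face streaming player over gRPC.
# Assumes audio2face_pb2 / audio2face_pb2_grpc were generated from NVIDIA's
# sample audio2face.proto shipped with Audio2Face; regenerate from your install.
import grpc
import soundfile

import audio2face_pb2
import audio2face_pb2_grpc


def push_audio(wav_path: str, url: str = "localhost:50051") -> None:
    # The streaming player expects mono float32 PCM plus its sample rate.
    data, samplerate = soundfile.read(wav_path, dtype="float32")
    with grpc.insecure_channel(url) as channel:
        stub = audio2face_pb2_grpc.Audio2FaceStub(channel)
        request = audio2face_pb2.PushAudioRequest(
            audio_data=data.tobytes(),
            samplerate=samplerate,
            # Assumed prim path of the streaming player in the loaded scene.
            instance_name="/World/audio2face/PlayerStreaming",
            block_until_playback_is_finished=True,
        )
        response = stub.PushAudio(request)
        print(response)


push_audio("speech.wav")
```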