zslrmhb / Omniverse-Virtual-Assisstant
Audio2Face Avatar with Riva SDK functionality
☆72 · Updated 2 years ago
Alternatives and similar repositories for Omniverse-Virtual-Assisstant:
Users interested in Omniverse-Virtual-Assisstant are comparing it to the libraries listed below
- ☆220 · Updated last year
- 3D Avatar Lip Synchronization from speech (JALI-based face rigging) ☆78 · Updated 2 years ago
- Use the NVIDIA Audio2Face headless server and interact with it through a requests API. Generate animation sequences for Unreal Engine 5, … ☆99 · Updated this week
- An MVP that uses Google STT, OpenAI LLM, and NVIDIA Audio2Face ☆62 · Updated 2 years ago
- Audio-driven facial animation generator with a BiLSTM used for transcribing the speech and a web interface displaying the avatar and the anim… ☆35 · Updated 2 years ago
- An open solution for AI-powered photorealistic digital humans. ☆117 · Updated last year
- ☆94 · Updated 3 years ago
- Web interface to convert text to speech and route it to an Audio2Face streaming player. ☆30 · Updated last year
- Blender add-on implementing the VOCA neural network. ☆59 · Updated 2 years ago
- lipsync is a simple and updated Python library for lip synchronization, based on Wav2Lip. It synchronizes lips in videos and images based… ☆110 · Updated 2 months ago
- ☆187 · Updated 11 months ago
- A curated list of resources on audio-driven talking face generation ☆141 · Updated 2 years ago
- 3D models powered by ChatGPT ☆73 · Updated 8 months ago
- Code for the project "Audio-Driven Video-Synthesis of Personalised Moderations" ☆19 · Updated last year
- SAiD: Blendshape-based Audio-Driven Speech Animation with Diffusion ☆101 · Updated last year
- Official PyTorch implementation of our paper "HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation" ☆202 · Updated last year
- [CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models ☆221 · Updated last year
- An optimized pipeline for DINet reducing inference latency by up to 60% 🚀. Kudos to the authors of the original repo for this amazing … ☆106 · Updated last year
- ☆124 · Updated 10 months ago
- This is the official source for our ICCV 2023 paper "EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation" ☆368 · Updated last year
- Emotionally responsive virtual MetaHuman CV with real-time user facial emotion detection (Unreal Engine 5). ☆45 · Updated 2 months ago
- [ICCV 2023] ToonTalker: Cross-Domain Face Reenactment ☆117 · Updated 4 months ago
- PyTorch reimplementation of audio-driven face mesh and blendshape models, including Audio2Mesh, VOCA, etc. ☆14 · Updated 6 months ago
- ☆152 · Updated last year
- A service to convert audio into facial blendshapes for lip syncing and facial performances. ☆56 · Updated 3 months ago
- Chinese to facial expressions (中文到表情) ☆29 · Updated 2 years ago
- Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place. ☆68 · Updated 8 months ago
- Faster Talking Face Animation on Xeon CPU ☆125 · Updated last year
- ☆44 · Updated last year
- Web-first SDK that provides the 52 ARKit-compatible blend shapes in real time from a camera feed, video, or image at 60 FPS using ML models. ☆84 · Updated 2 years ago