taherfattahi / nvidia-human-ai-lipsync
This project is a digital human that can talk to you, with its animation driven by your questions. It uses the NVIDIA API endpoint for Meta Llama 3 70B to generate responses, ElevenLabs to generate the voice, and Rhubarb Lip Sync to generate the lip sync.
☆63 · Updated last year
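The pipeline described above is a chain of three external services: an LLM for the reply, a text-to-speech service for the audio, and Rhubarb Lip Sync for the timed mouth shapes. Below is a minimal sketch of that chain in Python; the NVIDIA endpoint and model id, the ElevenLabs REST route, the voice id, the API keys, and the file names are assumptions for illustration and are not taken from this repository, and the `rhubarb` and `ffmpeg` binaries are assumed to be installed and on the PATH.

```python
# A minimal sketch of the question -> reply -> voice -> lip-sync chain, assuming:
#  * NVIDIA's OpenAI-compatible endpoint at integrate.api.nvidia.com and the
#    "meta/llama3-70b-instruct" model id,
#  * ElevenLabs' text-to-speech REST route and a placeholder voice id,
#  * the `rhubarb` and `ffmpeg` binaries available on PATH.
# Keys, ids, and file names are illustrative, not taken from this repository.
import json
import subprocess

import requests

NVIDIA_API_KEY = "nvapi-..."      # placeholder
ELEVENLABS_API_KEY = "..."        # placeholder
VOICE_ID = "your-voice-id"        # placeholder ElevenLabs voice


def generate_reply(question: str) -> str:
    """Ask Meta Llama 3 70B (served behind NVIDIA's API) for a reply."""
    resp = requests.post(
        "https://integrate.api.nvidia.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {NVIDIA_API_KEY}"},
        json={
            "model": "meta/llama3-70b-instruct",
            "messages": [{"role": "user", "content": question}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def synthesize_voice(text: str, out_path: str = "reply.mp3") -> str:
    """Turn the reply into speech with ElevenLabs and save the MP3 to disk."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": ELEVENLABS_API_KEY},
        json={"text": text},
        timeout=120,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)
    return out_path


def lip_sync(wav_path: str, out_path: str = "mouth_cues.json") -> dict:
    """Run Rhubarb Lip Sync on a WAV file and return its timed mouth cues."""
    subprocess.run(["rhubarb", "-f", "json", "-o", out_path, wav_path], check=True)
    with open(out_path) as f:
        return json.load(f)


if __name__ == "__main__":
    reply = generate_reply("What can you do?")
    mp3 = synthesize_voice(reply)
    # Rhubarb expects WAV/OGG input, so convert the MP3 first.
    subprocess.run(["ffmpeg", "-y", "-i", mp3, "reply.wav"], check=True)
    cues = lip_sync("reply.wav")
    print(reply)
    print(cues["mouthCues"][:5])  # [{'start': ..., 'end': ..., 'value': 'A'}, ...]
```

The `mouthCues` list Rhubarb writes (a start time, end time, and mouth-shape letter per entry) is the piece an avatar front end would map to visemes to animate the character.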
Alternatives and similar repositories for nvidia-human-ai-lipsync
Users interested in nvidia-human-ai-lipsync are comparing it to the libraries listed below.
- AvaChat is a real-time AI chat demo with animated talking heads; it uses Large Language Models via API (OpenAI and Claude) as text inpu… ☆113 · Updated 8 months ago
- Talking head video AI generator ☆82 · Updated 2 years ago
- (CVPR 2023) SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation ☆33 · Updated last year
- ☆40 · Updated 2 years ago
- Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place. ☆75 · Updated 7 months ago
- An open solution for AI-powered photorealistic digital humans. ☆135 · Updated 6 months ago
- AI Talking Head: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models. ☆36 · Updated 3 years ago
- Generating 3D Cartoon Avatars Using 2D Facial Images ☆33 · Updated 2 years ago
- Avatar Generation For Characters and Game Assets Using Deep Fakes ☆230 · Updated last year
- [SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild ☆61 · Updated last year
- Talking AI Avatar in Realtime ☆24 · Updated last year
- ☆45 · Updated 2 years ago
- ☆43 · Updated 2 years ago
- AI Lip Syncing application, deployed on Streamlit ☆43 · Updated last year
- Project that allows real-time recording of audio and lip-syncs the image. ☆108 · Updated last year
- Audio2Face Avatar with Riva SDK functionality ☆75 · Updated 3 years ago
- ☆78 · Updated 2 years ago
- One-shot face animation using webcam, capable of running in real time. ☆41 · Updated last year
- Talk with an AI-powered detailed 3D avatar. Uses an LLM, TTS, Unity, and lip sync to bring the character to life. ☆161 · Updated last year
- ☆18 · Updated last year
- AI 3D avatar voice interface in the browser. VAD -> STT -> LLM -> TTS -> VRM (Prototype/Proof-of-Concept) ☆73 · Updated 2 years ago
- Harness the power of NVIDIA technologies and LangChain to create dynamic avatars from live speech, integrating RIVA ASR and TTS with Audi… ☆96 · Updated last year
- ✨ Experience the enchantment of Story Blocks: an open-source project merging AI text generation and image synthesis to create captivating… ☆66 · Updated 2 years ago
- ☆34 · Updated last year
- gradio_demo.py with blinking in still-mode video creation ☆12 · Updated 2 years ago
- ☆33 · Updated last year
- SadTalker gradio_demo.py file with a code section that allows you to set the eye-blink and pose reference videos for the software to use wh… ☆11 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- Alternative to Flawless AI's TrueSync. Make lips in a video match provided audio using the power of Wav2Lip and GFPGAN. ☆128 · Updated last month