vinjn / llm-metahuman
An open solution for AI-powered photorealistic digital humans.
☆123 · Updated last year
Alternatives and similar repositories for llm-metahuman
Users interested in llm-metahuman are comparing it to the libraries listed below.
- Use the NVIDIA Audio2Face headless server and interact with it through a requests API (see the sketch after this list). Generate animation sequences for Unreal Engine 5, … ☆114 · Updated last month
- Audio2Face Avatar with Riva SDK functionality ☆73 · Updated 2 years ago
- ☆225 · Updated last year
- This Unreal Engine sample project demonstrates how to bring Epic Games' MetaHuman digital characters to life using the Amazon Polly text-… ☆49 · Updated 2 years ago
- A service to convert audio to facial blendshapes for lipsyncing and facial performances. ☆88 · Updated last month
- ☆18 · Updated last year
- Emotionally responsive Virtual Metahuman CV with Real-Time User Facial Emotion Detection (Unreal Engine 5). ☆45 · Updated 4 months ago
- Web interface to convert text to speech and route it to an Audio2Face streaming player. ☆33 · Updated last year
- 3D Avatar Lip Synchronization from speech (JALI-based face rigging) ☆82 · Updated 3 years ago
- This project is a digital human that can talk to you and is animated based on your questions. It uses the Nvidia API endpoint Meta llama3… ☆55 · Updated 10 months ago
- (CVPR 2023) SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation ☆23 · Updated last year
- An MVP that uses Google STT, OpenAI LLM, Nvidia Audio2Face ☆64 · Updated 2 years ago
- ☆46 · Updated last year
- Send MediaPipe data to Unreal Engine with LiveLink. ☆45 · Updated 2 years ago
- Talk with an AI-powered, detailed 3D avatar. Use an LLM, TTS, Unity, and lip sync to bring the character to life. ☆98 · Updated 5 months ago
- Chinese text to facial expressions (中文到表情) ☆29 · Updated 3 years ago
- Talking AI avatar in real time ☆15 · Updated last year
- A multi-GPU audio-to-face-animation AI model trainer for your iPhone ARKit data. ☆29 · Updated last week
- Local inference helper code for NeuroSync audio-to-face animation. ☆57 · Updated 2 weeks ago
- The NeuroSync Player allows real-time streaming of facial blendshapes into Unreal Engine 5 using LiveLink, enabling facial animation… ☆96 · Updated last week
- Updated fork of wav2lip-hq allowing for the use of current ESRGAN models ☆54 · Updated last year
- ☆95 · Updated 3 years ago
- ☆194 · Updated last year
- This project fixes the Wav2Lip project so that it can run on Python 3.9. Wav2Lip is a project that can be used to lip-sync videos to audi… ☆17 · Updated last year
- The API server version of the SadTalker project. Runs in Docker, 10 times faster than the original! ☆135 · Updated last year
- ☆127 · Updated last year
- AvaChat is a realtime AI chat demo with animated talking heads; it uses large language models via API (OpenAI and Claude) as text inpu… ☆100 · Updated last month
- A collection of AI model endpoints you can run locally for a real-time audio2face system. Toy demonstration, not for production. Use this… ☆18 · Updated last week
- ☆52 · Updated last year
- PyTorch reimplementation of audio-driven face mesh or blendshape models, including Audio2Mesh, VOCA, etc. ☆14 · Updated 8 months ago
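Several entries above (notably the Audio2Face headless-server project near the top of the list) share the same basic pattern: send audio to a locally running audio-to-face service over HTTP and get blendshape or animation data back for Unreal Engine. The sketch below is illustrative only; the host, port, endpoint path, and response fields are assumptions made for this example, not the actual Audio2Face REST API or any listed project's interface, so consult each repository for its real endpoints.

```python
# Minimal illustrative sketch: push an audio clip to a locally running
# audio-to-face service over HTTP and read back per-frame blendshape weights.
# The URL, endpoint path, and JSON fields are placeholders (assumptions),
# NOT the real Audio2Face API -- check each repository for its actual interface.
import requests

SERVICE_URL = "http://localhost:8011"  # assumed address of a local headless server


def animate_from_audio(wav_path: str) -> list[dict]:
    """Upload a WAV file and return a list of per-frame blendshape weights."""
    with open(wav_path, "rb") as f:
        resp = requests.post(
            f"{SERVICE_URL}/audio",  # hypothetical upload endpoint
            files={"audio": ("speech.wav", f, "audio/wav")},
            timeout=60,
        )
    resp.raise_for_status()
    # Assumed response shape: {"frames": [{"jawOpen": 0.4, ...}, ...]}
    return resp.json()["frames"]


if __name__ == "__main__":
    frames = animate_from_audio("speech.wav")
    print(f"received {len(frames)} frames of blendshape data")
```

In most of the listed projects the resulting blendshape frames would then be streamed into Unreal Engine 5 via LiveLink rather than printed, but that transport layer differs per repository.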