speaking-portal-project-team-a / The-Speaking-Portal-Project
The objective of the Speaking Portal Project is to design, develop, and deploy a lip-sync animation API for the Kukarella text-to-speech (TTS) web application. The API will serve as an animation-generating add-on for that system, so that users can both listen to and watch their avatar speak the text they provide.
☆13 · Updated 2 years ago
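As a rough illustration of how such an animation-generating add-on might be consumed from a client, here is a minimal sketch. The endpoint URL, request fields, and response shape below are assumptions for illustration only and do not reflect the project's actual API.

```typescript
// Hypothetical client-side sketch: send user-provided text to a TTS + lip-sync
// pipeline and receive URLs for the synthesized speech and the rendered avatar
// animation. All names and fields here are illustrative assumptions.

interface LipSyncRequest {
  text: string;     // user-provided text to be spoken
  voiceId: string;  // TTS voice to synthesize the audio with
  avatarId: string; // avatar model to animate
}

interface LipSyncResponse {
  audioUrl: string; // synthesized speech
  videoUrl: string; // lip-synced avatar animation
}

async function generateSpeakingAvatar(req: LipSyncRequest): Promise<LipSyncResponse> {
  // POST the text to a (hypothetical) animation endpoint and wait for the result.
  const res = await fetch("https://example.com/api/lipsync", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) {
    throw new Error(`Lip-sync request failed: ${res.status}`);
  }
  return (await res.json()) as LipSyncResponse;
}

// Example usage: the returned URLs would be attached to <audio> and <video>
// elements so the user can both listen to and watch the avatar speak.
generateSpeakingAvatar({
  text: "Hello from the Speaking Portal!",
  voiceId: "en-US-demo",
  avatarId: "default-avatar",
}).then(({ videoUrl }) => console.log("Animation ready at", videoUrl));
```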
Alternatives and similar repositories for The-Speaking-Portal-Project
Users interested in The-Speaking-Portal-Project are comparing it to the libraries listed below.
- AI Talking Head: create video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models. ☆36 · Updated 3 years ago
- AI 3D avatar voice interface in the browser: VAD -> STT -> LLM -> TTS -> VRM (prototype/proof of concept). ☆73 · Updated 2 years ago
- Multivoice: enhance your foreign-language movie and TV show experience with personalized dubbed versions. Our project uses voice cloning … ☆26 · Updated 2 years ago
- Talking head video AI generator. ☆82 · Updated 2 years ago
- AvaChat is a realtime AI chat demo with animated talking heads; it uses large language models via API (OpenAI and Claude) as text inpu… ☆113 · Updated 8 months ago
- Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place. ☆75 · Updated 7 months ago
- AI lip-syncing application, deployed on Streamlit. ☆43 · Updated last year
- SadTalker gradio_demo.py file with a code section that allows you to set the eye-blink and pose reference videos for the software to use wh… ☆11 · Updated 2 years ago
- lipsync is a simple and updated Python library for lip synchronization, based on Wav2Lip. It synchronizes lips in videos and images based… ☆142 · Updated last year
- This project fixes the Wav2Lip project so that it can run on Python 3.9. Wav2Lip is a project that can be used to lip-sync videos to audi… ☆17 · Updated 2 years ago
- Optimized wav2lip. ☆18 · Updated 2 years ago
- Harness the power of NVIDIA technologies and LangChain to create dynamic avatars from live speech, integrating RIVA ASR and TTS with Audi… ☆96 · Updated last year
- Canvas-based talking head model using viseme data (see the viseme sketch after this list). ☆32 · Updated 2 years ago
- Faster Talking Face Animation on Xeon CPU. ☆130 · Updated 2 years ago
- The API server version of the SadTalker project. Runs in Docker, 10 times faster than the original! ☆145 · Updated 2 years ago
- Long-Inference, High Quality Synthetic Speaker (AI avatar/AI presenter). ☆262 · Updated 2 years ago
- Audio2Face Avatar with Riva SDK functionality. ☆75 · Updated 3 years ago
- Listen, transcribe, reply: a voice assistant using the OpenAI & ElevenLabs APIs. ☆14 · Updated 2 years ago
- A Full-Duplex Open-Domain Dialogue Agent with Continuous Turn-Taking Behavior. ☆36 · Updated 2 years ago
- A modified version of vid2vid for the Speech2Video, Text2Video paper. ☆36 · Updated 2 years ago
- A curated list of resources for audio-driven talking face generation. ☆145 · Updated 3 years ago
- Full version of wav2lip-onnx, including face alignment, face enhancement, and more... ☆151 · Updated 7 months ago
- This project is a digital human that can talk to you and is animated based on your questions. It uses the Nvidia API endpoint Meta llama3… ☆63 · Updated last year
- GUI to sync video mouth movements to match audio, utilizing wav2lip-hq. Completed as part of a technical interview. ☆12 · Updated last year
- This is a project about talking faces. We use 576×576 facial images for training, which can generate 2k, 4k, 6k, and 8k digital hum… ☆55 · Updated last year
- Official implementation of the paper "DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models". ☆15 · Updated 2 years ago
- A curated list of 'Talking Head Generation' resources. Features influential papers, groundbreaking algorithms, crucial GitHub repositorie… ☆76 · Updated 2 years ago
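One of the entries above animates a canvas-based talking head from viseme data, i.e. time-stamped mouth shapes derived from the speech audio. A minimal sketch of that general idea follows, assuming a sorted list of viseme events and preloaded mouth-shape images; the event format, viseme names, and canvas coordinates are illustrative assumptions, not any listed project's actual data format.

```typescript
// Minimal sketch of viseme-driven mouth animation on an HTML canvas.
// The viseme names, timing format, and drawing coordinates are assumptions
// made for illustration only.

// A viseme event: which mouth shape to show and when (seconds from audio start).
interface VisemeEvent {
  time: number;   // start time of the viseme, in seconds
  viseme: string; // e.g. "sil", "aa", "E", "O", "PP"
}

// Preloaded mouth-shape images, keyed by viseme name (assumed to exist).
type MouthFrames = Record<string, HTMLImageElement>;

// Return the viseme that is active at playback time t.
function activeViseme(events: VisemeEvent[], t: number): string {
  let current = "sil"; // default to a closed/neutral mouth
  for (const e of events) {
    if (e.time <= t) current = e.viseme;
    else break; // events are assumed sorted by time
  }
  return current;
}

// Redraw the mouth region on every animation frame while the audio plays.
function animateMouth(
  ctx: CanvasRenderingContext2D,
  audio: HTMLAudioElement,
  events: VisemeEvent[],
  frames: MouthFrames,
): void {
  const draw = () => {
    const viseme = activeViseme(events, audio.currentTime);
    const frame = frames[viseme] ?? frames["sil"];
    ctx.clearRect(120, 200, 80, 60);        // mouth region (illustrative coordinates)
    ctx.drawImage(frame, 120, 200, 80, 60); // paint the mouth shape for this viseme
    if (!audio.ended) requestAnimationFrame(draw);
  };
  requestAnimationFrame(draw);
}
```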