speaking-portal-project-team-a / The-Speaking-Portal-Project
The objective of the Speaking Portal Project is to design, develop, and deploy a lip-sync animation API for the Kukarella text-to-speech (TTS) web application. The API serves as an animation-generating add-on for that system, so users can both listen to and watch their avatar speak the text they provide.
☆12 · Updated 2 years ago
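The listing does not spell out the add-on's interface, but the project description implies a simple request/response flow: the client submits user-provided text (plus, presumably, a voice and avatar choice), and the service returns synthesized speech together with a lip-synced animation. The TypeScript sketch below illustrates that flow; the `/animate` endpoint, field names, and response shape are assumptions made for illustration, not the project's actual API.

```typescript
// Hypothetical client for a lip-sync animation add-on like the one described
// above. Endpoint path, request fields, and response shape are illustrative
// assumptions, not the project's documented API.

interface SpeakRequest {
  text: string;      // user-provided text to be spoken
  voiceId: string;   // TTS voice to synthesize with
  avatarId: string;  // avatar whose lips will be animated
}

interface SpeakResponse {
  audioUrl: string;  // synthesized speech (TTS output)
  videoUrl: string;  // lip-synced avatar animation rendered from that audio
}

async function requestSpeakingAvatar(
  baseUrl: string,
  req: SpeakRequest
): Promise<SpeakResponse> {
  const res = await fetch(`${baseUrl}/animate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) {
    throw new Error(`Animation request failed: ${res.status}`);
  }
  return (await res.json()) as SpeakResponse;
}

// Example usage: play audioUrl and videoUrl together so the user can both
// listen to and watch the avatar speak their text.
// requestSpeakingAvatar("https://example.invalid/api", {
//   text: "Hello world",
//   voiceId: "en-US-1",
//   avatarId: "default",
// }).then((out) => console.log(out.videoUrl));
```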
Alternatives and similar repositories for The-Speaking-Portal-Project
Users interested in The-Speaking-Portal-Project are comparing it to the libraries listed below.
- Multivoice: Enhance your foreign-language movie and TV show experience with personalized dubbed versions. Our project uses voice cloning … ☆26 · Updated 2 years ago
- AI Talking Head: create video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models. ☆36 · Updated 2 years ago
- Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place. ☆73 · Updated 3 months ago
- SadTalker gradio_demo.py file with a code section that allows you to set the eye-blink and pose reference videos for the software to use wh… ☆11 · Updated 2 years ago
- AvaChat is a realtime AI chat demo with animated talking heads; it uses Large Language Models via API (OpenAI and Claude) as text inpu… ☆110 · Updated 5 months ago
- AI lip-syncing application, deployed on Streamlit ☆43 · Updated last year
- An open-source chatbot architecture for voice/vision (and multimodal) assistants, local (CPU/GPU-bound) and remote (I/O-bound). ☆78 · Updated last week
- Listen, transcribe, reply: a voice assistant using the OpenAI & ElevenLabs APIs ☆14 · Updated 2 years ago
- Canvas-based talking head model using viseme data ☆32 · Updated 2 years ago
- Talking Face Generation system ☆19 · Updated last year
- Optimized Wav2Lip ☆18 · Updated last year
- Automatically generate a lip-synced avatar based on a transcript and audio ☆13 · Updated 2 years ago
- AI 3D avatar voice interface in the browser: VAD -> STT -> LLM -> TTS -> VRM (prototype/proof of concept) ☆71 · Updated 2 years ago
- Faster Talking Face Animation on Xeon CPU ☆130 · Updated last year
- Wav2Lip UHQ improvement with ControlNet 1.1 ☆74 · Updated 2 years ago
- Talking head video AI generator ☆79 · Updated last year
- A curated list of 'Talking Head Generation' resources. Features influential papers, groundbreaking algorithms, crucial GitHub repositorie… ☆77 · Updated last year
- lipsync is a simple and updated Python library for lip synchronization, based on Wav2Lip. It synchronizes lips in videos and images based… ☆134 · Updated 8 months ago
- This repository contains the code of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Mult… ☆39 · Updated last year
- VoiceCraftAI is a revolutionary AI tool to dub videos into multiple regional languages and lip-sync them at the same time. ☆69 · Updated last year
- (CVPR 2023) SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation ☆32 · Updated last year
- This project is a digital human that can talk to you and is animated based on your questions. It uses the Nvidia API endpoint Meta llama3… ☆61 · Updated last year
- DoyenTalker uses deep learning techniques to generate personalized avatar videos that speak user-provided text in a specified voice. The … ☆13 · Updated last year
- This project fixes the Wav2Lip project so that it can run on Python 3.9. Wav2Lip is a project that can be used to lip-sync videos to audi… ☆17 · Updated 2 years ago
- Official implementation of the paper "DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models" ☆14 · Updated last year