speaking-portal-project-team-a / The-Speaking-Portal-Project
The objective of the Speaking Portal Project is to design, develop, and deploy a lip-sync animation API for the Kukarella text-to-speech (TTS) web application. The API serves as an animation-generating add-on to that system, so that users can both listen to and watch their avatar speak the user-provided text.
☆12 · Updated 2 years ago
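The description above only summarizes the project. As a rough illustration of what such an add-on could look like from a client's point of view, the sketch below submits text to a hypothetical lip-sync endpoint and polls for the rendered avatar video. The base URL, endpoint paths, payload fields, and response shape are assumptions made for illustration; they are not taken from The-Speaking-Portal-Project's actual API.

```python
# Hypothetical client sketch for a lip-sync animation API sitting on top of a
# TTS service. All endpoint paths and field names below are assumed, not the
# project's real interface.
import time
import requests

BASE_URL = "https://example.com/api"  # placeholder host


def request_lipsync_video(text: str, voice: str = "en-US-default") -> str:
    # Submit the text to be spoken and animated; the server is assumed to
    # return a job identifier for the asynchronous rendering task.
    resp = requests.post(f"{BASE_URL}/lipsync", json={"text": text, "voice": voice})
    resp.raise_for_status()
    job_id = resp.json()["job_id"]

    # Poll until the animation job finishes, then return the video URL.
    while True:
        status = requests.get(f"{BASE_URL}/lipsync/{job_id}").json()
        if status["state"] == "done":
            return status["video_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "lip-sync job failed"))
        time.sleep(2)


if __name__ == "__main__":
    print(request_lipsync_video("Hello from the speaking portal!"))
```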
Alternatives and similar repositories for The-Speaking-Portal-Project
Users interested in The-Speaking-Portal-Project are comparing it to the repositories listed below.
- AI Talking Head: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models. ☆35 · Updated 2 years ago
- (CVPR 2023) SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation ☆29 · Updated last year
- SadTalker gradio_demo.py file with a code section that allows you to set the eye blink and pose reference videos for the software to use wh… ☆11 · Updated last year
- ☆13 · Updated 5 months ago
- Automatically generate a lip-synced avatar based on a transcript and audio ☆13 · Updated 2 years ago
- Multivoice: Enhance your foreign-language movie and TV show experience with personalized dubbed versions. Our project uses voice cloning … ☆26 · Updated last year
- Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place. ☆72 · Updated 11 months ago
- AI-generated music video with Riffusion and Gradio ☆21 · Updated 2 years ago
- Optimized wav2lip ☆19 · Updated last year
- ☆40 · Updated last year
- AI 3D avatar voice interface in the browser. VAD -> STT -> LLM -> TTS -> VRM (prototype/proof-of-concept) ☆69 · Updated 2 years ago
- Code for the project "Audio-Driven Video-Synthesis of Personalised Moderations" ☆20 · Updated last year
- GUI to sync video mouth movements to match audio, utilizing wav2lip-hq. Completed as part of a technical interview. ☆11 · Updated last year
- wav2lip-api ☆11 · Updated 2 years ago
- ☆12 · Updated last year
- Canvas-based talking head model using viseme data ☆31 · Updated last year
- One-shot face animation using a webcam, capable of running in real time. ☆37 · Updated last year
- Official implementation for the paper "DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models" ☆15 · Updated last year
- [SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild ☆59 · Updated last year
- Generate video stories with AI ✨ ☆31 · Updated 9 months ago
- lipsync is a simple and updated Python library for lip synchronization, based on Wav2Lip. It synchronizes lips in videos and images based… ☆124 · Updated 4 months ago
- ☆8 · Updated last year
- Wav2Lip UHQ improvement with ControlNet 1.1 ☆73 · Updated last year
- ☆19 · Updated last year
- This is a project about talking faces. We use 576×576 facial images for training, which can generate 2k, 4k, 6k, and 8k digital hum… ☆53 · Updated last year
- Video-to-video translation via few-shot voice cloning & audio-based lip sync ☆25 · Updated 11 months ago
- AI lip-syncing application, deployed on Streamlit ☆41 · Updated last year
- Inference service based on DINet; runs inference on video streams and video files ☆16 · Updated last year
- This project fixes the Wav2Lip project so that it can run on Python 3.9. Wav2Lip is a project that can be used to lip-sync videos to audi… ☆17 · Updated last year
- Create your own personal avatar for Zoom or Discord chats or even live streaming... ☆10 · Updated 8 months ago