speaking-portal-project-team-a / The-Speaking-Portal-Project
The objective of the Speaking Portal Project is to design, develop, and deploy a lip-sync animation API for the Kukarella text-to-speech (TTS) web application. The API serves as an animation-generating add-on, so that users can both listen to and watch their avatar speak the text they provide.
☆12 · Updated 2 years ago
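Since the project is described as an animation-generating add-on for a TTS web app, a minimal client-side sketch may help illustrate the intended flow: user text in, lip-synced avatar video out. The endpoint URL, field names, and response format below are assumptions for illustration only; the actual API surface of the Speaking Portal / Kukarella integration is not documented in this listing.

```python
# Minimal sketch of how a client might call a lip-sync animation API like the
# one described above. The endpoint, request fields, and response shape are
# hypothetical and may differ from the real Speaking Portal API.
import requests

API_URL = "https://example.com/api/lipsync"  # hypothetical endpoint


def request_avatar_animation(text: str, voice: str, avatar_id: str) -> bytes:
    """Submit user-provided text and receive a lip-synced avatar video."""
    payload = {
        "text": text,         # text the avatar should speak
        "voice": voice,       # TTS voice used to synthesize the audio
        "avatar": avatar_id,  # which avatar image/model to animate
    }
    response = requests.post(API_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.content  # e.g. an MP4 with synced audio and lip movement


if __name__ == "__main__":
    video = request_avatar_animation(
        "Hello, world!", voice="en-US-1", avatar_id="default"
    )
    with open("avatar.mp4", "wb") as f:
        f.write(video)
```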
Alternatives and similar repositories for The-Speaking-Portal-Project:
Users interested in The-Speaking-Portal-Project are comparing it to the libraries listed below.
- (CVPR 2023) SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation · ☆29 · Updated 11 months ago
- Audio2Face Avatar with Riva SDK functionality · ☆73 · Updated 2 years ago
- Canvas-based talking head model using viseme data · ☆31 · Updated last year
- AI Talking Head: create a video from plain text or an audio file in minutes, with support for 100+ languages and 350+ voice models. · ☆35 · Updated 2 years ago
- Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place. · ☆70 · Updated 10 months ago
- Talking AI Avatar in Realtime · ☆15 · Updated last year
- Multivoice: Enhance your foreign-language movie and TV show experience with personalized dubbed versions. Our project uses voice cloning … · ☆26 · Updated last year
- A curated list of 'Talking Head Generation' resources. Features influential papers, groundbreaking algorithms, crucial GitHub repositorie… · ☆74 · Updated last year
- Talking Face Generation system · ☆19 · Updated last year
- A curated list of resources for audio-driven talking face generation · ☆141 · Updated 2 years ago
- SadTalker gradio_demo.py file with a code section that allows you to set the eye blink and pose reference videos for the software to use wh… · ☆11 · Updated last year
- Code for the project "Audio-Driven Video-Synthesis of Personalised Moderations" · ☆20 · Updated last year
- Optimized wav2lip · ☆19 · Updated last year
- A quality zero-shot lipsync pipeline built with MuseTalk, LivePortrait, and CodeFormer. · ☆37 · Updated 7 months ago
- Official implementation of the paper "DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models" · ☆15 · Updated last year
- Full version of wav2lip-onnx, including face alignment, face enhancement, and more… · ☆107 · Updated last week
- [ICCV 2023] ToonTalker: Cross-Domain Face Reenactment · ☆120 · Updated 6 months ago
- An optimized pipeline for DINet, reducing inference latency by up to 60% 🚀. Kudos to the authors of the original repo for this amazing … · ☆105 · Updated last year
- Talking head video AI generator · ☆78 · Updated last year
- Wav2Lip UHQ Improvement with ControlNet 1.1 · ☆72 · Updated last year
- Faster Talking Face Animation on Xeon CPU · ☆127 · Updated last year
- This repository contains the codes of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Mult… · ☆38 · Updated last year
- Wav2Lip-Emotion extends Wav2Lip to modify facial expressions of emotions via L1 reconstruction and pre-trained emotion objectives. We als… · ☆96 · Updated 2 years ago
- This project fixes the Wav2Lip project so that it can run on Python 3.9. Wav2Lip is a project that can be used to lip-sync videos to audi… · ☆17 · Updated last year
- Video to video translation via few-shot voice cloning & audio-based lip sync · ☆25 · Updated 10 months ago
- wav2lip-api · ☆11 · Updated 2 years ago