Bomou-AI / Talking-Head
AI Talking Head: create videos from plain text or an audio file in minutes, with support for 100+ languages and 350+ voice models.
☆33 · Updated 2 years ago
Alternatives and similar repositories for Talking-Head:
Users interested in Talking-Head are comparing it to the libraries listed below.
- ☆40 · Updated last year
- Speech to Facial Animation using GANs · ☆41 · Updated 3 years ago
- GUI to sync video mouth movements to match audio, utilizing wav2lip-hq. Completed as part of a technical interview. · ☆11 · Updated 10 months ago
- Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place. · ☆68 · Updated 8 months ago
- One-shot face animation using webcam, capable of running in real time. · ☆36 · Updated 9 months ago
- A curated list of 'Talking Head Generation' resources. Features influential papers, groundbreaking algorithms, crucial GitHub repositorie… · ☆75 · Updated last year
- A software pipeline for creating realistic videos of people talking, using only images. · ☆39 · Updated 3 years ago
- AI Lip Syncing application, deployed on Streamlit · ☆38 · Updated last year
- Optimized wav2lip · ☆19 · Updated last year
- An improved version of Real-time-voice-cloning · ☆48 · Updated last year
- lipsync is a simple and updated Python library for lip synchronization, based on Wav2Lip. It synchronizes lips in videos and images based… · ☆110 · Updated 2 months ago
- Audio-Visual Lip Synthesis via Intermediate Landmark Representation · ☆16 · Updated last year
- Code for the project: "Audio-Driven Video-Synthesis of Personalised Moderations" · ☆19 · Updated last year
- (CVPR 2023) SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation · ☆28 · Updated 9 months ago
- Cloned repository from Hugging Face Spaces (CVPR 2022 Demo) · ☆54 · Updated 2 years ago
- Wav2Lip UHQ Improvement with ControlNet 1.1 · ☆73 · Updated last year
- This repository contains the code of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Mult… · ☆37 · Updated last year
- Code for the paper "Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion" (IJCAI 2021) · ☆8 · Updated 3 years ago
- Audio-driven facial animation generator with BiLSTM used for transcribing the speech and web interface displaying the avatar and the anim… · ☆35 · Updated 2 years ago
- Canvas-based talking head model using viseme data · ☆30 · Updated last year
- [ICCV 2023] ToonTalker: Cross-Domain Face Reenactment · ☆117 · Updated 4 months ago
- ☆11 · Updated last year
- ☆27 · Updated last year
- Wav2Lip-Emotion extends Wav2Lip to modify facial expressions of emotions via L1 reconstruction and pre-trained emotion objectives. We als… · ☆96 · Updated 2 years ago
- Automatically generate a lip-synced avatar based on a transcript and audio · ☆14 · Updated 2 years ago
- Uses ChatGPT, TTS, and Stable Diffusion to automatically generate videos · ☆29 · Updated 2 years ago
- SadTalker gradio_demo.py file with a code section that allows you to set the eye blink and pose reference videos for the software to use wh… · ☆11 · Updated last year
- PyTorch implementation of NEUTART, a system that creates photorealistic talking avatars from an input text transcription. · ☆33 · Updated last week
- ☆16 · Updated last year