jetfontanilla / canvas-talking-head-model
canvas-based talking head model using viseme data
☆31 · Updated last year
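For context, here is a minimal sketch of how a canvas-based talking head can be driven from viseme data: pick the mouth image whose viseme is active at the current audio playback time and redraw it every frame. This is an illustrative assumption, not this repository's actual API; the element IDs, file names, and event shape are made up.

```typescript
// Minimal sketch (assumed names/format): draw the mouth shape whose viseme
// is active at the current audio playback time.
interface VisemeEvent {
  visemeId: number;   // index into the mouth-shape sprite set
  offsetMs: number;   // when this viseme starts, relative to audio start
}

const canvas = document.getElementById("head") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
const audio = new Audio("speech.mp3");            // synthesized speech track
const mouthFrames: HTMLImageElement[] = [];       // one image per viseme id (loaded elsewhere)
const visemes: VisemeEvent[] = [];                // parsed from viseme JSON, sorted by offsetMs

function currentViseme(timeMs: number): number {
  // Last viseme event that has already started (events assumed sorted).
  let id = 0;
  for (const v of visemes) {
    if (v.offsetMs <= timeMs) id = v.visemeId;
    else break;
  }
  return id;
}

function render() {
  const timeMs = audio.currentTime * 1000;
  const frame = mouthFrames[currentViseme(timeMs)];
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  if (frame) ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);
  requestAnimationFrame(render);                  // keep redrawing while audio plays
}

audio.addEventListener("play", () => requestAnimationFrame(render));
```

The same loop works regardless of where the viseme JSON comes from (a TTS service or a forced-alignment tool), as long as each event carries a viseme id and a start offset.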
Alternatives and similar repositories for canvas-talking-head-model
Users interested in canvas-talking-head-model are comparing it to the libraries listed below.
- AI Talking Head: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models. ☆36 · Updated 2 years ago
- Example code showing how to generate viseme JSON (a hedged sketch follows this list) ☆13 · Updated 2 years ago
- This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Mult… ☆38 · Updated last year
- AvaChat is a realtime AI chat demo with animated talking heads; it uses Large Language Models via API (OpenAI and Claude) as text inpu… ☆103 · Updated last month
- This project is a digital human that can talk to you and is animated based on your questions. It uses the Nvidia API endpoint Meta llama3… ☆58 · Updated 10 months ago
- SadTalker gradio_demo.py file with a code section that allows you to set the eye-blink and pose reference videos for the software to use wh… ☆11 · Updated 2 years ago
- GUI to sync video mouth movements to match audio, using wav2lip-hq. Completed as part of a technical interview. ☆11 · Updated last year
- Talking head video AI generator ☆78 · Updated last year
- (CVPR 2023) SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation ☆30 · Updated last year
- ☆30 · Updated last year
- Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place. ☆72 · Updated last week
- ☆18 · Updated 3 years ago
- Optimized wav2lip ☆19 · Updated last year
- ☆19 · Updated 2 years ago
- Creates video from TTS output and viseme images. ☆12 · Updated 3 years ago
- Generate video stories with AI ✨ ☆32 · Updated 9 months ago
- AI-powered animation tool ☆51 · Updated 3 years ago
- Project that allows realtime recording of audio and lip-syncs the image. ☆75 · Updated last year
- One-shot face animation using a webcam, capable of running in real time. ☆37 · Updated last year
- AI 3D avatar voice interface in the browser. VAD -> STT -> LLM -> TTS -> VRM (prototype/proof of concept) ☆70 · Updated 2 years ago
- This project builds on SadTalker to implement video lip synthesis. ☆14 · Updated last year
- AI lip-syncing application, deployed on Streamlit ☆42 · Updated last year
- Speech-driven lip sync for the web browser ☆27 · Updated 6 years ago
- wav2lip-api ☆11 · Updated 2 years ago
- Automatically generate a lip-synced avatar from a transcript and audio ☆13 · Updated 2 years ago
- Uses ChatGPT, TTS, and Stable Diffusion to automatically generate videos ☆29 · Updated 2 years ago
- ☆28 · Updated last year
- ☆11 · Updated last year
- Code for the project "Audio-Driven Video-Synthesis of Personalised Moderations" ☆20 · Updated last year
- ☆149 · Updated 2 years ago
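For the viseme JSON item referenced above, a hypothetical sketch of producing such a file from timed phonemes via a phoneme-to-viseme lookup table. The phoneme set, the mapping values, and the output shape are illustrative assumptions and do not reflect any listed repository's actual format.

```typescript
// Hypothetical sketch: convert timed phonemes into viseme JSON.
// The phoneme names, viseme ids, and JSON shape are illustrative assumptions.
interface TimedPhoneme { phoneme: string; offsetMs: number }
interface VisemeEvent { visemeId: number; offsetMs: number }

// Toy phoneme -> viseme-id table (a real table covers the full phoneme set).
const PHONEME_TO_VISEME: Record<string, number> = {
  sil: 0, p: 1, b: 1, m: 1, f: 2, v: 2, aa: 10, iy: 6, uw: 7,
};

function toVisemes(phonemes: TimedPhoneme[]): VisemeEvent[] {
  return phonemes.map((p) => ({
    visemeId: PHONEME_TO_VISEME[p.phoneme] ?? 0, // unknown phonemes fall back to neutral
    offsetMs: p.offsetMs,
  }));
}

// Example usage: serialize to the kind of JSON a canvas renderer could consume.
const events = toVisemes([
  { phoneme: "m", offsetMs: 0 },
  { phoneme: "aa", offsetMs: 120 },
  { phoneme: "sil", offsetMs: 400 },
]);
console.log(JSON.stringify(events, null, 2));
```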