jetfontanilla / azure-viseme-json
Example code on how to generate viseme JSON
☆ 13 · Updated 2 years ago

Alternatives and similar repositories for azure-viseme-json:
Users interested in azure-viseme-json are comparing it to the libraries listed below.
- canvas-based talking head model using viseme data ☆ 30 · Updated last year
- ☆ 11 · Updated last year
- A website which shows examples of the various blendshapes that can be animated using ARKit. ☆ 18 · Updated 3 years ago
- Realtime VRM Humanoid Avatar Animation using Human Library and ThreeJS ☆ 87 · Updated 2 years ago
- Creates video from TTS output and viseme images. ☆ 11 · Updated 2 years ago
- ☆ 18 · Updated 3 years ago
- A software pipeline for creating realistic videos of people talking, using only images. ☆ 38 · Updated 3 years ago
- ☆ 14 · Updated last year
- Speech-driven lip sync for the web browser ☆ 27 · Updated 5 years ago
- AI Talking Head: create video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models. ☆ 35 · Updated 2 years ago
- Code for the project "Audio-Driven Video-Synthesis of Personalised Moderations" ☆ 19 · Updated last year
- A modified version of vid2vid for the Speech2Video and Text2Video papers ☆ 35 · Updated last year
- Speech AI training and inference tools ☆ 36 · Updated last year
- StoryDiffusion serverless worker ☆ 16 · Updated 11 months ago
- DINet-based inference service, for both video streams and video files ☆ 15 · Updated last year
- Automatically generate a lip-synced avatar based on a transcript and audio ☆ 14 · Updated 2 years ago
- Optimized wav2lip ☆ 19 · Updated last year
- SadTalker gradio_demo.py file with a code section that lets you set the eye-blink and pose reference videos for the software to use wh… ☆ 11 · Updated last year
- Auto video maker handling many AIs ☆ 10 · Updated last year
- Unofficial one-click version of LivePortrait, with webcam support ☆ 17 · Updated 8 months ago
- Code for the paper "Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion" (IJCAI 2021) ☆ 8 · Updated 3 years ago
- ☆ 18 · Updated last year
- GUI to sync video mouth movements to match audio, utilizing wav2lip-hq. Completed as part of a technical interview. ☆ 11 · Updated 11 months ago
- gradio_demo.py with blinking on still-mode video creation ☆ 12 · Updated last year
- Towards Robust Blind Face Restoration with Codebook Lookup Transformer ☆ 28 · Updated last year
- Floral Diffusion is a custom diffusion model trained by jags using DD version 5.6 ☆ 26 · Updated 2 years ago
- Chinese text to facial expressions ☆ 29 · Updated 2 years ago
- Official implementation for the paper "DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models" ☆ 15 · Updated last year
- ☆ 29 · Updated last year
- AI 3D avatar voice interface in the browser: VAD → STT → LLM → TTS → VRM (prototype / proof of concept) ☆ 66 · Updated last year
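For context, the viseme JSON that azure-viseme-json and several of the projects above consume comes from the Azure Speech SDK, which fires a `viseme_received` event during synthesis with an audio offset (in 100-nanosecond ticks) and a viseme ID. A minimal sketch of shaping those events into JSON — the `visemes_to_json` helper and the sample offsets are hypothetical, not taken from the repo:

```python
import json

def visemes_to_json(events):
    """Convert (audio_offset_ticks, viseme_id) pairs into a JSON string.

    Offsets arrive from the Azure Speech SDK in 100-ns ticks, so dividing
    by 10,000 yields milliseconds.
    """
    return json.dumps(
        [{"offset_ms": ticks / 10_000, "viseme_id": vid} for ticks, vid in events],
        indent=2,
    )

# With the real SDK (azure-cognitiveservices-speech), the events would be
# captured roughly like this before synthesis runs:
#   synthesizer.viseme_received.connect(
#       lambda e: captured.append((e.audio_offset, e.viseme_id)))
#   synthesizer.speak_text_async("Hello world").get()
captured = [(500_000, 0), (1_250_000, 19), (2_000_000, 1)]  # sample data
print(visemes_to_json(captured))
```

The resulting list of `{offset_ms, viseme_id}` entries is the kind of timeline a canvas- or VRM-based talking-head renderer can play back against the synthesized audio.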