sieve-community / sievesync
A quality zero-shot lipsync pipeline built with MuseTalk, LivePortrait, and CodeFormer.
☆41 · Updated 9 months ago
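The pipeline composes three off-the-shelf components: LivePortrait-style portrait animation, MuseTalk for zero-shot lipsync, and CodeFormer for face restoration. The sketch below only illustrates how such a three-stage composition can be wired together; the stage order is an assumption, and every function name, signature, and file path is a hypothetical placeholder rather than the actual sievesync, MuseTalk, LivePortrait, or CodeFormer API.

```python
# Minimal sketch of a three-stage zero-shot lipsync pipeline in the spirit of
# sievesync: animate the portrait, lipsync it to the audio, restore face quality.
# All stage functions are hypothetical placeholders, not the real project APIs.
from dataclasses import dataclass


@dataclass
class PipelineInputs:
    source_video: str   # path to the source portrait / talking-head video
    driving_audio: str  # path to the target speech audio


def animate_portrait(video_path: str) -> str:
    """Placeholder for a LivePortrait-style animation/retargeting stage."""
    return video_path  # assumed: returns a path to the animated video


def lipsync(video_path: str, audio_path: str) -> str:
    """Placeholder for a MuseTalk-style zero-shot lipsync stage."""
    return video_path  # assumed: returns a path to the lipsynced video


def restore_faces(video_path: str) -> str:
    """Placeholder for a CodeFormer-style face-restoration stage."""
    return video_path  # assumed: returns a path to the enhanced video


def run_pipeline(inputs: PipelineInputs) -> str:
    animated = animate_portrait(inputs.source_video)
    synced = lipsync(animated, inputs.driving_audio)
    return restore_faces(synced)


if __name__ == "__main__":
    out = run_pipeline(PipelineInputs("portrait.mp4", "speech.wav"))
    print("final video:", out)
```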
Alternatives and similar repositories for sievesync
Users interested in sievesync are comparing it to the repositories listed below.
- Full version of wav2lip-onnx including face alignment, face enhancement, and more... ☆130 · Updated last month
- 🤢 LipSick: Fast, High Quality, Low Resource Lipsync Tool 🤮 ☆214 · Updated last year
- Fast-running LivePortrait with TensorRT and ONNX models ☆165 · Updated 11 months ago
- Updated fork of wav2lip-hq allowing for the use of current ESRGAN models ☆55 · Updated last year
- Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place. ☆73 · Updated 3 weeks ago
- Faster Talking Face Animation on Xeon CPU ☆129 · Updated last year
- [SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild ☆60 · Updated last year
- Alternative to Flawless AI's TrueSync. Make lips in video match provided audio using the power of Wav2Lip and GFPGAN. ☆125 · Updated last year
- ☆32 · Updated 8 months ago
- Simple and fast wav2lip using a newly trained 256x256 ONNX-converted model for inference. Easy installation. ☆41 · Updated 9 months ago
- Official PyTorch implementation of the paper "HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation". ☆209 · Updated last year
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portrai… ☆220 · Updated 2 months ago
- Using Claude Sonnet 3.5 to forward (reverse) engineer code from the VASA white paper - WIP - (this is for La Raza 🎷) ☆295 · Updated 8 months ago
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆333 · Updated 3 weeks ago
- Official implementation of Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation ☆230 · Updated 3 months ago
- This repository contains the code of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Mult… ☆38 · Updated last year
- [ICCV 2025] Official PyTorch implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait. ☆285 · Updated 2 weeks ago
- Wav2Lip UHQ Improvement with ControlNet 1.1 ☆73 · Updated last year
- The source code of "DINet: deformation inpainting network for realistic face visually dubbing on high resolution video." ☆38 · Updated 10 months ago
- Unlock Pose Diversity: Accurate and Efficient Implicit Keypoint-based Spatiotemporal Diffusion for Audio-driven Talking Portrait ☆269 · Updated last month
- LLIA - Enabling Low-Latency Interactive Avatars: Real-Time Audio-Driven Portrait Video Generation with Diffusion Models ☆79 · Updated last month
- X-Portrait 2 ☆60 · Updated 8 months ago
- ☆43 · Updated last year
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" ☆514 · Updated 11 months ago
- [CVPR 2025] HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation ☆264 · Updated last month
- Emote Portrait Alive - using AI to reverse engineer code from the white paper (abandoned) ☆181 · Updated 8 months ago
- VASA-1 ☆102 · Updated 11 months ago
- [ICASSP 2024] Adaptive Super Resolution for One-Shot Talking-Head Generation ☆180 · Updated last year
- [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation ☆31 · Updated last year
- ☆53 · Updated last year