sieve-community / sievesync
A high-quality zero-shot lipsync pipeline built with MuseTalk, LivePortrait, and CodeFormer.
☆42 · Updated 10 months ago
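For context on how a hosted pipeline like this is typically invoked, here is a minimal sketch using the Sieve Python client (`sievedata`). The function slug `sieve/lipsync`, the argument order, and the `result.path` attribute are assumptions made for illustration rather than details confirmed by this page; check the Sieve docs or the sievesync repo's own entry point before relying on them.

```python
# Minimal sketch of calling a hosted lipsync pipeline through the Sieve Python client.
# Assumptions (not confirmed by this page): the client package is `sievedata` (imported
# as `sieve`), the hosted function slug is "sieve/lipsync", and it takes a video file
# followed by an audio file. Verify against the Sieve docs / sievesync README.
import sieve

video = sieve.File(path="speaker.mp4")   # source video whose lips will be re-synced
audio = sieve.File(path="speech.wav")    # target speech audio

# Look up the hosted function and run it remotely; per the description above, the
# pipeline combines MuseTalk (lip generation), LivePortrait (motion/retargeting),
# and CodeFormer (face restoration) behind this single call.
lipsync = sieve.function.get("sieve/lipsync")
result = lipsync.run(video, audio)

print(result.path)  # assumed: local path of the downloaded, lip-synced output video
```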
Alternatives and similar repositories for sievesync
Users interested in sievesync are comparing it to the repositories listed below.
- 🤢 LipSick: Fast, High Quality, Low Resource Lipsync Tool 🤮 ☆215 · Updated last year
- Full version of wav2lip-onnx including face alignment and face enhancement and more... ☆132 · Updated last month
- Updated fork of wav2lip-hq allowing for the use of current ESRGAN models ☆55 · Updated last year
- Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place. ☆75 · Updated last month
- Fast running Live Portrait with TensorRT and ONNX models ☆167 · Updated last year
- Faster Talking Face Animation on Xeon CPU ☆130 · Updated last year
- Alternative to Flawless AI's TrueSync. Make lips in video match provided audio using the power of Wav2Lip and GFPGAN. ☆124 · Updated last year
- Using Claude Sonnet 3.5 to forward (reverse) engineer code from VASA white paper - WIP - (this is for La Raza 🌷) ☆295 · Updated 9 months ago
- Simple and fast wav2lip using new 256x256 resolution trained onnx-converted model for inference. Easy installation ☆42 · Updated 9 months ago
- Pytorch official implementation for our paper "HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation". ☆210 · Updated last year
- This repository contains the codes of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Mult… ☆38 · Updated last year
- X-Portrait 2 ☆60 · Updated last week
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆349 · Updated last month
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portrai… ☆235 · Updated 2 months ago
- [SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild ☆60 · Updated last year
- VASA-1 ☆102 · Updated last year
- Official implementation of Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation ☆231 · Updated 4 months ago
- [ICCV 2023] ToonTalker: Cross-Domain Face Reenactment ☆121 · Updated 9 months ago
- ☆366 · Updated 11 months ago
- ICASSP 2024: Adaptive Super Resolution For One-Shot Talking-Head Generation ☆180 · Updated last year
- Wav2Lip UHQ Improvement with ControlNet 1.1 ☆74 · Updated 2 years ago
- Unlock Pose Diversity: Accurate and Efficient Implicit Keypoint-based Spatiotemporal Diffusion for Audio-driven Talking Portrait ☆269 · Updated last week
- ☆33 · Updated 8 months ago
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers ☆182 · Updated 2 weeks ago
- LLIA - Enabling Low-Latency Interactive Avatars: Real-Time Audio-Driven Portrait Video Generation with Diffusion Models ☆110 · Updated last month
- ☆53 · Updated last year
- The source code of "DINet: deformation inpainting network for realistic face visually dubbing on high resolution video." ☆38 · Updated 11 months ago
- The API server version of the SadTalker project. Runs in Docker, 10 times faster than the original! ☆138 · Updated 2 years ago
- The code for some apps built with Sieve. ☆82 · Updated 8 months ago
- This is a project about talking faces. We use 576×576 sized facial images for training, which can generate 2k, 4k, 6k, and 8k digital hum… ☆55 · Updated last year