Inferencer / LipSick
🤢 LipSick: Fast, High Quality, Low Resource Lipsync Tool 🤮
☆217 · Updated last year
Alternatives and similar repositories for LipSick
Users interested in LipSick are comparing it to the repositories listed below.
- Fast-running Live Portrait with TensorRT and ONNX models ☆170 · Updated last year
- ☆371 · Updated last year
- PyTorch official implementation for our paper "HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation" ☆212 · Updated last year
- Official implementation of Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for talking head video generation ☆236 · Updated 6 months ago
- Official implementation of EMOPortraits: Emotion-enhanced Multimodal One-shot Head Avatars ☆383 · Updated 5 months ago
- WIP: running some training with overfitting - https://wandb.ai/snoozie/vasa-overfitting ☆296 · Updated last week
- Emote Portrait Alive: using AI to reverse-engineer code from the white paper (abandoned) ☆182 · Updated 11 months ago
- Full version of wav2lip-onnx, including face alignment, face enhancement, and more... ☆139 · Updated 3 months ago
- [ECCV 2024 Oral] EDTalk - Official PyTorch Implementation ☆436 · Updated last week
- An optimized pipeline for DINet, reducing inference latency by up to 60% 🚀. Kudos to the authors of the original repo for this amazing … ☆108 · Updated 2 years ago
- Official code of CVPR '23 paper "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator" ☆322 · Updated 2 years ago
- ICASSP 2024: Adaptive Super Resolution for One-Shot Talking-Head Generation ☆180 · Updated last year
- Alternative to Flawless AI's TrueSync. Make lips in video match provided audio using the power of Wav2Lip and GFPGAN. ☆125 · Updated last year
- PyTorch implementation of "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator" ☆214 · Updated 2 years ago
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆494 · Updated 2 months ago
- [ICCV 2025] Official PyTorch implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait.