Inferencer / LipSick
🤢 LipSick: Fast, High Quality, Low Resource Lipsync Tool 🤮
★213 · Updated 11 months ago
Alternatives and similar repositories for LipSick
Users interested in LipSick are comparing it to the repositories listed below.
- PyTorch official implementation of the paper "HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation" (★208, updated last year)
- Fast-running Live Portrait with TensorRT and ONNX models (★162, updated 10 months ago)
- (★359, updated 10 months ago)
- An optimized pipeline for DINet, reducing inference latency by up to 60%. Kudos to the authors of the original repo for this amazing … (★107, updated last year)
- Full version of wav2lip-onnx, including face alignment, face enhancement, and more... (★126, updated 2 weeks ago)
- Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place. (★72, updated this week)
- Official implementation of EMOPortraits: Emotion-enhanced Multimodal One-shot Head Avatars (★378, updated 2 months ago)
- Emote Portrait Alive: using AI to reverse-engineer code from the white paper (abandoned) (★181, updated 7 months ago)
- Official implementation of Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation (★229, updated 2 months ago)
- ICASSP 2024: Adaptive Super Resolution for One-Shot Talking-Head Generation (★180, updated last year)
- Official code of the CVPR '23 paper "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator" (★318, updated last year)
- [ECCV 2024 Oral] EDTalk - Official PyTorch Implementation (★418, updated 5 months ago)
- PyTorch implementation of "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator" (★212, updated last year)
- Alternative to Flawless AI's TrueSync. Make lips in video match provided audio using the power of Wav2Lip and GFPGAN. (★124, updated 11 months ago)
- Official PyTorch implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait (★261, updated 4 months ago)
- Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis (★358, updated 5 months ago)
- Using Claude 3.5 Sonnet to forward (reverse) engineer code from the VASA white paper - WIP - (this is for La Raza) (★294, updated 7 months ago)
- Faster Talking Face Animation on Xeon CPU (★129, updated last year)
- Using Claude Opus to reverse-engineer code from MegaPortraits: One-shot Megapixel Neural Head Avatars (★93, updated 7 months ago)
- The source code of "DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video" (★38, updated 9 months ago)
- One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior, CVPRW 2024 (★61, updated 8 months ago)
- (★159, updated last year)
- R2-Talker: Realistic Real-Time Talking Head Synthesis with Hash Grid Landmarks Encoding and Progressive Multilayer Conditioning (★80, updated last year)
- [ICCV 2023] ToonTalker: Cross-Domain Face Reenactment (★120, updated 7 months ago)
- DICE-Talk is a diffusion-based emotional talking-head generation method that can generate vivid and diverse emotions for speaking portraits (★209, updated last month)
- [CVPR 2025] The official code of HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation (★256, updated 2 weeks ago)
- Official code for the ICCV 2023 paper "Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation" (★292, updated 3 weeks ago)
- [CVPR 2024] Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework (★351, updated 4 months ago)
- Audio-Visual Generative Adversarial Network for Face Reenactment (★158, updated last year)
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" (★512, updated 10 months ago)