Inferencer / LipSick
🤢 LipSick: Fast, High Quality, Low Resource Lipsync Tool 🤮
☆204 · Updated 8 months ago
Alternatives and similar repositories for LipSick:
Users interested in LipSick are comparing it to the repositories listed below.
- Official PyTorch implementation of the paper "HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation". ☆204 · Updated last year
- Official implementation of Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation. ☆213 · Updated this week
- Fast-running Live Portrait with TensorRT and ONNX models. ☆157 · Updated 8 months ago
- ☆348 · Updated 7 months ago
- An optimized pipeline for DINet, reducing inference latency by up to 60% 🚀. Kudos to the authors of the original repo for this amazing … ☆106 · Updated last year
- Bring portraits to life via webcam! ☆126 · Updated 8 months ago
- [ECCV 2024 Oral] EDTalk - Official PyTorch Implementation. ☆406 · Updated 3 months ago
- PyTorch implementation of "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator". ☆211 · Updated last year
- Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place. ☆68 · Updated 9 months ago
- Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis. ☆198 · Updated 2 months ago
- Official code of the CVPR '23 paper "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator". ☆313 · Updated last year
- ICASSP 2024: Adaptive Super Resolution for One-Shot Talking-Head Generation. ☆179 · Updated last year
- Full version of wav2lip-onnx, including face alignment, face enhancement, and more... ☆93 · Updated last month
- Emote Portrait Alive - using AI to reverse engineer code from the white paper (abandoned). ☆180 · Updated 5 months ago
- The source code of "DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video". ☆37 · Updated 6 months ago
- Official implementation of EMOPortraits: Emotion-enhanced Multimodal One-shot Head Avatars. ☆365 · Updated last month
- Alternative to Flawless AI's TrueSync. Make lips in video match provided audio using the power of Wav2Lip and GFPGAN. ☆120 · Updated 8 months ago
- Using Claude Sonnet 3.5 to forward (reverse) engineer code from the VASA white paper - WIP - (this is for La Raza 🎷). ☆281 · Updated 4 months ago
- One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior, CVPRW 2024. ☆60 · Updated 5 months ago
- R2-Talker: Realistic Real-Time Talking Head Synthesis with Hash Grid Landmarks Encoding and Progressive Multilayer Conditioning. ☆80 · Updated last year
- Using Claude Opus to reverse engineer code from MegaPortraits: One-shot Megapixel Neural Head Avatars. ☆89 · Updated 4 months ago
- [ICCV 2023] ToonTalker: Cross-Domain Face Reenactment. ☆117 · Updated 5 months ago
- [CVPR 2024] Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework. ☆346 · Updated 2 months ago
- PyTorch implementation for the paper "Emotionally Enhanced Talking Face Generation" (ICCVW '23 and ACM-MMW '23). ☆363 · Updated 2 months ago
- High-Fidelity Lip-Syncing with Wav2Lip and Real-ESRGAN. ☆440 · Updated last year
- Updated fork of wav2lip-hq allowing the use of current ESRGAN models. ☆54 · Updated 10 months ago
- Official PyTorch implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait. ☆66 · Updated last month
- [CVPR 2023] The implementation of "DiffTalk: Crafting Diffusion Models for Generalized Audio-Driven Portraits Animation". ☆460 · Updated 8 months ago
- Official project repo for the paper "Speech Driven Video Editing via an Audio-Conditioned Diffusion Model". ☆229 · Updated last year
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention". ☆491 · Updated 8 months ago