warmshao / FasterLivePortrait
Bring portraits to life in real time! ONNX/TensorRT support! (Real-time portrait animation)
☆802 · Updated last month
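The headline feature is real-time inference through ONNX Runtime and TensorRT. As a rough, generic illustration only (this is not FasterLivePortrait's actual API), the sketch below shows how ONNX Runtime inference with a TensorRT execution provider typically looks; the model filename, input shape, and dummy frame are hypothetical placeholders.

```python
# Generic ONNX Runtime inference sketch (not FasterLivePortrait's actual API).
# "portrait_model.onnx" and the 1x3x256x256 input are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

# Prefer TensorRT, then CUDA, then CPU; ONNX Runtime falls back to the next
# provider in the list if one is unavailable on the machine.
providers = [
    "TensorrtExecutionProvider",
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]

session = ort.InferenceSession("portrait_model.onnx", providers=providers)

# Feed a dummy RGB frame shaped to an assumed NCHW float32 input.
input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 256, 256).astype(np.float32)

outputs = session.run(None, {input_name: frame})
print([o.shape for o in outputs])
```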
Alternatives and similar repositories for FasterLivePortrait:
Users interested in FasterLivePortrait are comparing it to the libraries listed below:
- Memory-Guided Diffusion for Expressive Talking Video Generation ☆763 · Updated 2 months ago
- Diffusion-based Portrait and Animal Animation ☆710 · Updated 3 weeks ago
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" ☆488 · Updated 7 months ago
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆347 · Updated 2 months ago
- ☆399 · Updated 7 months ago
- VividTalk: One-Shot Audio-Driven Talking Head Generation Based on 3D Hybrid Prior ☆787 · Updated last year
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆422 · Updated last week
- ☆519 · Updated 3 months ago
- Official implementation of "FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on" ☆488 · Updated last month
- ☆715 · Updated last month
- ☆413 · Updated 6 months ago
- Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆182 · Updated 2 months ago
- FoleyCrafter: Bring Silent Videos to Life with Lifelike and Synchronized Sounds. (AI Foley master: add vivid, synchronized sound effects to your silent videos 😝) ☆558 · Updated 7 months ago
- Bring portraits to life via Monitor! ☆274 · Updated 7 months ago
- MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes; NeurIPS 2024; official code ☆650 · Updated 5 months ago
- LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control ☆451 · Updated 6 months ago
- [SIGGRAPH Asia 2024] Follow-Your-Emoji: This repo is the official implementation of "Follow-Your-Emoji: Fine-Controllable and Expressive … ☆374 · Updated 6 months ago
- The official HelloMeme GitHub site ☆585 · Updated last month
- Using Claude Sonnet 3.5 to forward (reverse) engineer code from the VASA white paper - WIP - (this is for La Raza 🎷) ☆281 · Updated 4 months ago
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆513 · Updated this week
- Talking-face video editing ☆286 · Updated 3 weeks ago
- StoryMaker: Towards consistent characters in text-to-image generation ☆671 · Updated 3 months ago
- This node provides lip-sync capabilities in ComfyUI using ByteDance's LatentSync model. It allows you to synchronize video lips with audi… ☆616 · Updated this week
- Hallo3: Highly Dynamic and Realistic Portrait Image Animation with Video Diffusion Transformer ☆1,149 · Updated last week
- [ICLR 2025] Animate-X: Universal Character Image Animation with Enhanced Motion Representation ☆264 · Updated last month
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model ☆730 · Updated 3 months ago
- JoyHallo: Digital human model for Mandarin ☆467 · Updated 4 months ago
- ComfyUI nodes for LivePortrait ☆1,897 · Updated 7 months ago
- Official implementation of EMOPortraits: Emotion-enhanced Multimodal One-shot Head Avatars ☆363 · Updated 3 weeks ago
- [CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis" ☆1,434 · Updated 6 months ago