warmshao / FasterLivePortrait
Bring portraits to life in real time! ONNX/TensorRT support! Real-time portrait animation!
☆1,023 · Updated 5 months ago
Alternatives and similar repositories for FasterLivePortrait
Users interested in FasterLivePortrait are comparing it to the repositories listed below.
- Diffusion-based Portrait and Animal Animation ☆846 · Updated 2 weeks ago
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆630 · Updated last month
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" ☆531 · Updated 2 months ago
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ☆1,601 · Updated 4 months ago
- ☆643 · Updated last month
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆574 · Updated 6 months ago
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆375 · Updated last month
- [CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis" ☆1,600 · Updated 3 months ago
- Memory-Guided Diffusion for Expressive Talking Video Generation ☆1,070 · Updated 4 months ago
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆375 · Updated 4 months ago
- VividTalk: One-Shot Audio-Driven Talking Head Generation Based on 3D Hybrid Prior ☆803 · Updated 2 years ago
- [ICCV 2025] Official PyTorch Implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait. ☆438 · Updated last month
- Official implementation of "MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling" ☆1,562 · Updated 6 months ago
- Select a portrait, click to move the head around (please use your own space / GPU!) ☆903 · Updated 4 months ago
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance ☆2,491 · Updated last month
- ☆421 · Updated last year
- ☆1,044 · Updated 7 months ago
- Official implementation of "FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on" ☆602 · Updated 10 months ago
- JoyHallo: Digital human model for Mandarin ☆519 · Updated 3 months ago
- ☆1,964 · Updated last week
- ViViD: Video Virtual Try-on using Diffusion Models ☆556 · Updated last year
- ComfyUI nodes for LivePortrait ☆2,108 · Updated last year
- [SIGGRAPH 2025] LAM: Large Avatar Model for One-shot Animatable Gaussian Head ☆875 · Updated 3 months ago
- [ECCV 2024 Oral] EDTalk - Official PyTorch Implementation ☆450 · Updated 2 months ago
- This node provides lip-sync capabilities in ComfyUI using ByteDance's LatentSync model. It allows you to synchronize video lips with audi… ☆923 · Updated 3 months ago
- You can use EchoMimic in ComfyUI ☆680 · Updated 3 months ago
- The official HelloMeme GitHub site ☆626 · Updated 5 months ago
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portrai… ☆284 · Updated 4 months ago
- ICCV 2025 ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control… ☆433 · Updated 4 months ago
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model. ☆759 · Updated last year