Francis-Rings / StableAvatar
We present StableAvatar, the first end-to-end video diffusion transformer to synthesize infinite-length, high-quality, audio-driven avatar videos without any post-processing, conditioned on a reference image and audio.
☆1,198 · Updated last week
Alternatives and similar repositories for StableAvatar
Users interested in StableAvatar are comparing it to the repositories listed below.
- SkyReels-A2: Compose anything in video diffusion transformers ☆700 · Updated 7 months ago
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆377 · Updated last week
- HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning ☆1,111 · Updated last week
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portrai… ☆292 · Updated 5 months ago
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers ☆498 · Updated 5 months ago
- [ICCV 2025] Official PyTorch Implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait. ☆453 · Updated 2 months ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆582 · Updated 7 months ago
- Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation. ☆722 · Updated last month
- talking-face video editing ☆419 · Updated 11 months ago
- MagicTryOn is a video virtual try-on framework based on a large-scale video diffusion Transformer. ☆505 · Updated this week
- Stream-Omni is a GPT-4o-like language-vision-speech chatbot that simultaneously supports interaction across various modality combinations… ☆382 · Updated 7 months ago
- [NeurIPS 2025] OmniTalker: Real-Time Text-Driven Talking Head Generation with In-Context Audio-Visual Style Replication ☆417 · Updated 4 months ago
- [CVPR 2025] We present StableAnimator, the first end-to-end ID-preserving video diffusion framework, which synthesizes high-quality videos… ☆1,407 · Updated 4 months ago
- [ICCV 2025] MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion ☆243 · Updated 7 months ago
- ☆650 · Updated 2 months ago
- [AAAI 2026] EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆739 · Updated this week
- Implementation of "Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length" ☆1,524 · Updated this week
- [CVPR 2025] Official implementation of "AnyDressing: Customizable Multi-Garment Virtual Dressing via Latent Diffusion Models" ☆329 · Updated 9 months ago
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆376 · Updated 2 months ago
- Open-source LstmSync digital-human generalization model; we build only the best generalization model! ☆139 · Updated last week
- Official code for StoryMem: Multi-shot Long Video Storytelling with Memory ☆632 · Updated last week
- ☆1,783 · Updated 5 months ago
- SoulX-FlashTalk is the first 14B model to achieve sub-second start-up latency (0.87s) while maintaining a real-time throughput of 32 FPS… ☆396 · Updated this week
- Diffusion-based Portrait and Animal Animation ☆853 · Updated last month
- PersonaLive!: Expressive Portrait Image Animation for Live Streaming ☆1,509 · Updated last month
- [ICCV 2025] Code Implementation of "ArtEditor: Learning Customized Instructional Image Editor from Few-Shot Examples" ☆431 · Updated 9 months ago
- [CVPR 2025 Highlight] X-Dyna: Expressive Dynamic Human Image Animation ☆262 · Updated last year
- One-to-All Animation: Alignment-Free Character Animation and Image Pose Transfer ☆435 · Updated last month
- We present FlashPortrait, an end-to-end video diffusion transformer capable of synthesizing ID-preserving, infinite-length videos while a… ☆432 · Updated 3 weeks ago
- [CVPR 2025] The official code of HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation ☆332 · Updated last month