Francis-Rings / StableAvatar
We present StableAvatar, the first end-to-end video diffusion transformer that synthesizes infinite-length, high-quality audio-driven avatar videos without any post-processing, conditioned on a reference image and audio.
☆1,199 · Updated 2 weeks ago
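As a rough illustration of the kind of mechanism that makes "infinite-length" audio-driven generation possible, here is a minimal sketch of a generic sliding-window scheme: the audio feature track is split into overlapping windows, each window is synthesized into a short clip, and overlapping frames are cross-faded. Every name here (`generate_clip`, `WINDOW`, `OVERLAP`, the tensor shapes) is a hypothetical stand-in for illustration; this is not StableAvatar's actual API or its specific method.

```python
import numpy as np

WINDOW = 48   # frames synthesized per pass (assumed value)
OVERLAP = 8   # frames shared between consecutive windows (assumed value)

def generate_clip(audio_feats: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for one model pass:
    audio features (T, D) -> video frames (T, H, W, 3)."""
    t = audio_feats.shape[0]
    return np.zeros((t, 64, 64, 3), dtype=np.float32)

def generate_long_video(audio_feats: np.ndarray) -> np.ndarray:
    """Chain overlapping windows so video length tracks the audio length."""
    frames = None
    step = WINDOW - OVERLAP
    for start in range(0, len(audio_feats), step):
        clip = generate_clip(audio_feats[start:start + WINDOW])
        if frames is None:
            frames = clip
            continue
        # Cross-fade the overlapping frames so window seams stay smooth.
        n = min(OVERLAP, len(frames) - start, len(clip))
        if n > 0:
            w = np.linspace(0.0, 1.0, n)[:, None, None, None]
            frames[start:start + n] = (1 - w) * frames[start:start + n] + w * clip[:n]
        frames = np.concatenate([frames, clip[n:]], axis=0)
    return frames

# e.g. 10 seconds of 25 fps audio features with 128 dimensions per frame:
video = generate_long_video(np.random.rand(250, 128).astype(np.float32))
print(video.shape)  # (250, 64, 64, 3)
```

The point of the overlap is continuity: each new window sees (and is blended with) the tail of the previous one, so identity and motion do not jump at clip boundaries even though no single pass ever covers the whole video.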
Alternatives and similar repositories for StableAvatar
Users interested in StableAvatar are comparing it to the repositories listed below.
- SkyReels-A2: Compose anything in video diffusion transformers ☆701 · Updated 8 months ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆582 · Updated 8 months ago
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆376 · Updated 2 weeks ago
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers ☆498 · Updated 5 months ago
- [CVPR 2025] We present StableAnimator, the first end-to-end ID-preserving video diffusion framework, which synthesizes high-quality videos without any post-processing, conditioned on a reference image and a sequence of poses. ☆1,407 · Updated 4 months ago
- Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation. ☆725 · Updated last month
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portraits. ☆292 · Updated 6 months ago
- ☆1,782 · Updated 6 months ago
- Stream-Omni is a GPT-4o-like language-vision-speech chatbot that simultaneously supports interaction across various modality combinations. ☆383 · Updated 7 months ago
- MagicTryOn is a video virtual try-on framework based on a large-scale video diffusion Transformer. ☆510 · Updated last week
- HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning ☆1,127 · Updated last week
- [ICCV 2025] Official PyTorch Implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait. ☆457 · Updated 2 months ago
- [AAAI 2026] EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆755 · Updated this week
- ☆650 · Updated 2 months ago
- Official code for StoryMem: Multi-shot Long Video Storytelling with Memory ☆638 · Updated 2 weeks ago
- [ICCV 2025] ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control… ☆445 · Updated 5 months ago
- [NeurIPS 2025] OmniTalker: Real-Time Text-Driven Talking Head Generation with In-Context Audio-Visual Style Replication ☆418 · Updated 4 months ago
- ☆714 · Updated 3 months ago
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ☆1,204 · Updated 3 months ago
- [CVPR 2025] Official implementation of "AnyDressing: Customizable Multi-Garment Virtual Dressing via Latent Diffusion Models" ☆328 · Updated 9 months ago
- PersonaLive!: Expressive Portrait Image Animation for Live Streaming ☆1,583 · Updated last month
- [ICCV 2025] MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion ☆243 · Updated 7 months ago
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆376 · Updated 2 months ago
- Diffusion-based Portrait and Animal Animation ☆854 · Updated last month
- ☆1,046 · Updated 8 months ago
- [ICCV 2025] Code Implementation of "ArtEditor: Learning Customized Instructional Image Editor from Few-Shot Examples" ☆431 · Updated 9 months ago
- [NeurIPS 2025] Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation ☆2,794 · Updated last month
- We present FlashPortrait, an end-to-end video diffusion transformer capable of synthesizing ID-preserving, infinite-length videos while a… ☆434 · Updated 3 weeks ago
- Memory-Guided Diffusion for Expressive Talking Video Generation ☆1,076 · Updated 6 months ago
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ☆1,617 · Updated last week