Francis-Rings / StableAvatar
We present StableAvatar, the first end-to-end video diffusion transformer that synthesizes infinite-length, high-quality, audio-driven avatar videos without any post-processing, conditioned on a reference image and audio.
☆1,154 · Updated last week
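As a rough illustration of what "conditioned on a reference image and audio" combined with unbounded-length generation can look like, the sketch below mimics a sliding-window denoising loop in plain PyTorch. It is a toy, self-contained stand-in with hypothetical names (`denoise_clip`, `generate_video`); it is not StableAvatar's actual API, architecture, or inference code.

```python
import torch

def denoise_clip(latents, ref_embed, audio_embed, steps=4):
    # Stand-in for a diffusion-transformer denoiser: it simply blends the
    # conditioning signals into the latents so the sketch stays runnable.
    for _ in range(steps):
        latents = 0.9 * latents + 0.05 * ref_embed + 0.05 * audio_embed
    return latents

def generate_video(ref_image, audio_frames, clip_len=16, overlap=4):
    # Generate arbitrarily many latent frames clip-by-clip, reusing the tail
    # of each clip as the head of the next so motion stays continuous.
    ref_embed = ref_image.mean(dim=(-1, -2), keepdim=True)  # toy image "encoder"
    clips, carry = [], None
    for start in range(0, audio_frames.shape[0] - clip_len + 1, clip_len - overlap):
        audio_embed = audio_frames[start:start + clip_len].view(clip_len, 1, 1, 1)
        latents = torch.randn(clip_len, 4, 8, 8)          # fresh noise per window
        if carry is not None:
            latents[:overlap] = carry                      # stitch consecutive windows
        latents = denoise_clip(latents, ref_embed, audio_embed)
        carry = latents[-overlap:]
        clips.append(latents if not clips else latents[overlap:])
    return torch.cat(clips, dim=0)

if __name__ == "__main__":
    video_latents = generate_video(torch.rand(4, 8, 8), torch.randn(128))
    print(video_latents.shape)  # (num_frames, 4, 8, 8) latent frames
```

A real system would replace the toy blend with a trained diffusion transformer and proper image/audio encoders; the point here is only the windowed conditioning pattern that lets video length grow beyond a single model context.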
Alternatives and similar repositories for StableAvatar
Users interested in StableAvatar are comparing it to the libraries listed below.
- SkyReels-A2: Compose anything in video diffusion transformers ☆691 · Updated 6 months ago
- HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning ☆1,020 · Updated 2 months ago
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆375 · Updated 4 months ago
- Implementation of "Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length" ☆942 · Updated last week
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers ☆490 · Updated 3 months ago
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portraits… ☆284 · Updated 4 months ago
- Stream-Omni is a GPT-4o-like language-vision-speech chatbot that simultaneously supports interaction across various modality combinations… ☆374 · Updated 6 months ago
- Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation. ☆689 · Updated 3 months ago
- MagicTryOn is a video virtual try-on framework based on a large-scale video diffusion Transformer. ☆490 · Updated 3 weeks ago
- [CVPR 2025] We present StableAnimator, the first end-to-end ID-preserving video diffusion framework, which synthesizes high-quality videos… ☆1,397 · Updated 2 months ago
- [ICCV 2025] Official PyTorch implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait ☆436 · Updated last month
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆573 · Updated 6 months ago
- ☆700 · Updated last month
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆375 · Updated 3 weeks ago
- ☆643 · Updated last month
- [AAAI 2026] EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆667 · Updated 3 weeks ago
- [ICCV 2025] ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control… ☆433 · Updated 3 months ago
- ☆1,044 · Updated 7 months ago
- [NeurIPS 2025] OmniTalker: Real-Time Text-Driven Talking Head Generation with In-Context Audio-Visual Style Replication ☆405 · Updated 3 months ago
- Streamlining Cartoon Production with Generative Post-Keyframing ☆517 · Updated 3 months ago
- Talking-face video editing ☆411 · Updated 9 months ago
- [SIGGRAPH Asia 2025] DreamO: A Unified Framework for Image Customization ☆1,733 · Updated 4 months ago
- [SIGGRAPH 2025] LAM: Large Avatar Model for One-shot Animatable Gaussian Head ☆875 · Updated 3 months ago
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ☆1,196 · Updated 2 months ago
- ☆1,744 · Updated 4 months ago
- [ICCV 2025] Code Implementation of "ArtEditor: Learning Customized Instructional Image Editor from Few-Shot Examples" ☆428 · Updated 7 months ago
- [CVPR 2025] Official implementation of "AnyDressing: Customizable Multi-Garment Virtual Dressing via Latent Diffusion Models" ☆328 · Updated 8 months ago
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ☆1,601 · Updated 3 months ago
- JoyHallo: Digital human model for Mandarin ☆518 · Updated 2 months ago
- Diffusion-based Portrait and Animal Animation ☆845 · Updated last week