Francis-Rings / StableAvatar
We present StableAvatar, the first end-to-end video diffusion transformer that synthesizes infinite-length, high-quality, audio-driven avatar videos without any post-processing, conditioned on a reference image and audio.
☆147 · Updated last week
Alternatives and similar repositories for StableAvatar
Users interested in StableAvatar are comparing it to the repositories listed below.
- KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution ☆350 · Updated last week
- MagicTryOn is a video virtual try-on framework based on a large-scale video diffusion Transformer. ☆427 · Updated last month
- [ICLR2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆370 · Updated 6 months ago
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers ☆361 · Updated this week
- Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation. ☆131 · Updated last week
- [CVPR 2025 Highlight] X-Dyna: Expressive Dynamic Human Image Animation ☆257 · Updated 6 months ago
- DICE-Talk is a diffusion-based emotional talking head generation method that can generate vivid and diverse emotions for speaking portraits. ☆244 · Updated last week
- [ICCV 2025] Code Implementation of "ArtEditor: Learning Customized Instructional Image Editor from Few-Shot Examples" ☆413 · Updated 3 months ago
- [ICCV 2025] Official Pytorch Implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait. ☆365 · Updated last month
- [ICCV 2025] ACTalker: an end-to-end video diffusion framework for talking head synthesis that supports both single and multi-signal control. ☆373 · Updated 3 weeks ago
- 🔥 ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation Using a Single Prompt ☆284 · Updated 2 months ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆557 · Updated 2 months ago
- In-context subject-driven image generation while preserving foreground fidelity ☆346 · Updated 2 months ago
- [ICCV2025] MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion ☆231 · Updated last month
- [CVPR 2025] Official implementation of "AnyDressing: Customizable Multi-Garment Virtual Dressing via Latent Diffusion Models" ☆324 · Updated 4 months ago
- [CVPR 2025] The official code of HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation ☆287 · Updated 2 months ago
- Official implementation of MAGREF: Masked Guidance for Any-Reference Video Generation ☆249 · Updated last month
- This is the official implementation of our paper: "MiniMax-Remover: Taming Bad Noise Helps Video Object Removal" ☆323 · Updated 3 weeks ago
- [NeurIPS 2024] SHMT: Self-supervised Hierarchical Makeup Transfer via Latent Diffusion Models ☆196 · Updated 6 months ago
- Official implementation of the paper "MusicInfuser: Making Video Diffusion Listen and Dance" ☆75 · Updated 4 months ago
- [Official] Voost: A Unified and Scalable Diffusion Transformer for Bidirectional Virtual Try-On and Try-Off ☆227 · Updated last week
- [ICLR 2025] Animate-X - PyTorch Implementation ☆304 · Updated 6 months ago
- ☆349 · Updated 5 months ago
- All-round Creator and Editor ☆233 · Updated 7 months ago
- homepage of DreamActor-M1 ☆62 · Updated last month
- [AAAI 2025] StoryWeaver: A Unified World Model for Knowledge-Enhanced Story Character Customization ☆216 · Updated 4 months ago
- EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation ☆278 · Updated this week
- ☆200 · Updated 4 months ago
- [ICLR 2025] Animate-X: Universal Character Image Animation with Enhanced Motion Representation ☆339 · Updated 6 months ago
- Official repository of "TryOffAnyone: Tiled Cloth Generation from a Dressed Person" ☆186 · Updated 6 months ago