WeChatCV / Stand-In
Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation.
☆722 · Updated last month
Alternatives and similar repositories for Stand-In
Users interested in Stand-In are comparing it to the repositories listed below.
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers ☆498 · Updated 5 months ago
- MoCha: End-to-End Video Character Replacement without Structural Guidance ☆609 · Updated 2 weeks ago
- SteadyDancer: Harmonized and Coherent Human Image Animation with First-Frame Preservation ☆568 · Updated last month
- Pusa: Thousands Timesteps Video Diffusion Model ☆672 · Updated 4 months ago
- ☆546 · Updated last month
- DreamID-V: Bridging the Image-to-Video Gap for High-Fidelity Face Swapping via Diffusion Transformer ☆485 · Updated 2 weeks ago
- The official code implementation of the paper "OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data" ☆426 · Updated 7 months ago
- ☆1,046 · Updated 8 months ago
- HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning ☆1,111 · Updated last week
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆582 · Updated 7 months ago
- Official implementation of SCAIL: Towards Studio-Grade Character Animation via In-Context Learning of 3D-Consistent Pose Representations ☆783 · Updated 3 weeks ago
- [ICCV 2025] Official PyTorch implementation of FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait ☆453 · Updated 2 months ago
- Directly Aligning the Full Diffusion Trajectory with Fine-Grained Human Preference