alibaba / animate-anything
Fine-Grained Open Domain Image Animation with Motion Guidance
☆947 · Updated last year
Alternatives and similar repositories for animate-anything
Users interested in animate-anything are comparing it to the libraries listed below.
- [CVPR 2024] PIA, your Personalized Image Animator. Animate your images by text prompt, combining with Dreambooth, achieving stunning videos… ☆973 · Updated last year
- Official Code for MotionCtrl [SIGGRAPH 2024] ☆1,463 · Updated 8 months ago
- Code and data for "AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks" [TMLR 2024] ☆629 · Updated last year
- [AAAI 2025] Follow-Your-Click: This repo is the official implementation of "Follow-Your-Click: Open-domain Regional Image Animation via S… ☆905 · Updated last month
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model. ☆754 · Updated 10 months ago
- [IJCV 2024] LaVie: High-Quality Video Generation with Cascaded Latent Diffusion Models ☆939 · Updated 11 months ago
- [ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction ☆943 · Updated 11 months ago
- Controllable video and image generation: SVD, Animate Anyone, ControlNet, ControlNeXt, LoRA ☆1,618 · Updated last year
- [ICML 2024] MagicPose (also known as MagicDance): Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion ☆769 · Updated last year
- [AAAI 2024] Follow-Your-Pose: This repo is the official implementation of "Follow-Your-Pose: Pose-Guided Text-to-Video Generation using … ☆1,344 · Updated last year
- Stable Video Diffusion Training Code and Extensions. ☆720 · Updated last year
- Official repo for VideoComposer: Compositional Video Synthesis with Motion Controllability ☆946 · Updated last year
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆499 · Updated last year
- [ICLR 2024] Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation" ☆847 · Updated 2 years ago
- [SIGGRAPH ASIA 2024 TCS] AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data ☆651 · Updated last year
- 📹 A more flexible framework that can generate videos at any resolution and create videos from images. ☆1,502 · Updated this week
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models ☆531 · Updated last year
- ICLR 2024 (Spotlight) ☆775 · Updated last year
- [ECCV 2024] OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models ☆695 · Updated last year
- Official implementation of FIFO-Diffusion: Generating Infinite Videos from Text without Training (NeurIPS 2024) ☆475 · Updated last year
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,960 · Updated last year
- InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation 🔥 ☆1,973 · Updated last year
- [ICLR 2025] Official implementation of MotionClone: Training-Free Motion Cloning for Controllable Video Generation ☆508 · Updated 4 months ago
- [CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation ☆778 · Updated last year
- [CVPR 2024, Highlight] Official code for DragDiffusion ☆1,237 · Updated last year
- [CVPR 2025] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text ☆1,604 · Updated 7 months ago
- ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment ☆1,263 · Updated last year
- [ICCV 2023] Consistent Image Synthesis and Editing ☆818 · Updated last year
- Official PyTorch implementation of StreamV2V. ☆513 · Updated 8 months ago
- Lumina-T2X is a unified framework for Text to Any Modality Generation ☆2,237 · Updated 8 months ago
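
Many of the repositories above build on the same image-to-video diffusion stack, with Stable Video Diffusion (SVD) as a common baseline. For orientation, the sketch below drives the public SVD checkpoint through the Hugging Face diffusers `StableVideoDiffusionPipeline`. It is a minimal illustration of that generic workflow under assumed placeholder paths, not the API of animate-anything or of any specific repository listed here.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the SVD image-to-video pipeline (fp16 weights to fit a single GPU).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Conditioning image; "input.png" is a placeholder path, resized to SVD's
# native 1024x576 resolution.
image = load_image("input.png").resize((1024, 576))

# Generate a short clip from the single image and write it to disk.
frames = pipe(
    image,
    decode_chunk_size=8,          # trade VRAM for decoding speed
    generator=torch.manual_seed(42),
).frames[0]
export_to_video(frames, "output.mp4", fps=7)
```

The motion-guidance repositories in the list (e.g. MotionCtrl, MOFA-Video, DragAnything) typically add trajectory, pose, or drag conditioning on top of a base image-to-video model of this kind rather than replacing it.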