Forked version of AnimateDiff that attempts to add init images. If you are looking for the original repo, please go to https://github.com/guoyww/animatediff/
☆154 · updated Jul 14, 2023
Alternatives and similar repositories for AnimateDiff
Users interested in AnimateDiff are comparing it to the repositories listed below.
- AnimateDiff I2V version. ☆185 · updated Mar 1, 2024
- AnimationDiff with train ☆126 · updated Feb 26, 2024
- A CLI utility/library for AnimateDiff Stable Diffusion generation ☆269 · updated Feb 23, 2026
- animatediff prompt travel ☆1,203 · updated Jan 13, 2024
- ☆31 · updated Jan 7, 2024
- A fork of the official implementation of AnimateDiff. ☆29 · updated Aug 8, 2023
- Official implementation of AnimateDiff. ☆12,038 · updated Jul 31, 2024
- Finetune ModelScope's Text-to-Video model using Diffusers 🧨 ☆695 · updated Dec 14, 2023
- ☆82 · updated Apr 10, 2023
- MotionDirector training for AnimateDiff. Train a MotionLoRA and run it on any compatible AnimateDiff UI. ☆308 · updated Aug 20, 2024
- Official code for MotionCtrl [SIGGRAPH 2024] ☆1,494 · updated Feb 19, 2025
- Generate images from an initial frame and text ☆37 · updated Jul 28, 2023
- Stylizing Video by Example (Jamriska et al., 2019) ☆50 · updated Jan 27, 2024
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising". ☆307 · updated Oct 19, 2025
- [ECCV 2024 Oral] MotionDirector: Motion Customization of Text-to-Video Diffusion Models. ☆1,052 · updated Aug 21, 2024
- Official PyTorch implementation for "TokenFlow: Consistent Diffusion Features for Consistent Video Editing", presenting "TokenFlow" (ICLR … ☆1,707 · updated Feb 3, 2025
- Official repo for VideoComposer: Compositional Video Synthesis with Motion Controllability ☆953 · updated Nov 11, 2023
- Diffusers pipeline for inpainting with any available finetune ☆36 · updated Jul 8, 2023
- ✨ Hotshot-XL: state-of-the-art AI text-to-GIF model trained to work alongside Stable Diffusion XL ☆1,113 · updated Jan 23, 2024
- ☆17 · updated Jul 30, 2024
- FlexiFilm: Long Video Generation with Flexible Conditions ☆31 · updated May 1, 2024
- Text-Guided Generation of Full-Body Image with Preserved Reference Face for Customized Animation ☆24 · updated Jun 24, 2024
- Official implementation for "ConceptLab: Creative Generation using Diffusion Prior Constraints" ☆255 · updated Dec 19, 2023
- Temporal coherence tools; an Automatic1111 extension. ☆148 · updated Apr 15, 2023
- The first open-domain closed-loop revisited benchmark for evaluating memory consistency and action control in world models. ☆41 · updated Feb 10, 2026
- Retrieval-Augmented Video Generation for Telling a Story ☆259 · updated Feb 5, 2024
- Create butter-smooth transitions between prompts, powered by Stable Diffusion ☆366 · updated Mar 29, 2024
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ☆359 · updated Dec 8, 2023
- Jupyter notebooks for PuLID face transfer with Flux.1 dev; able to run on the Google Colab free tier ☆18 · updated Dec 18, 2024
- [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models ☆669 · updated Jul 17, 2024
- [ICLR 2024] Code for FreeNoise, based on VideoCrafter ☆427 · updated Aug 25, 2025
- Make-A-Protagonist: Generic Video Editing with an Ensemble of Experts ☆322 · updated Aug 1, 2023
- ☆724 · updated Feb 9, 2024
- ControlAnimate library ☆48 · updated Nov 25, 2023
- ☆159 · updated Jan 15, 2023
- An image prompt adapter designed to enable a pretrained text-to-image diffusion model to generate images from an image prompt. ☆6,480 · updated Jun 28, 2024
- ☆20 · updated Jun 26, 2024
- Ablating Concepts in Text-to-Image Diffusion Models (ICCV 2023) ☆167 · updated Dec 21, 2024
- [ICLR 2024 Spotlight] Official implementation of ScaleCrafter for higher-resolution visual generation at inference time. ☆510 · updated Mar 7, 2024