G-U-N / AnimateLCM
[SIGGRAPH ASIA 2024 TCS] AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data
☆644 · Updated 7 months ago
Alternatives and similar repositories for AnimateLCM
Users interested in AnimateLCM are comparing it to the repositories listed below.
- ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment ☆1,208 · Updated 11 months ago
- ☆436 · Updated last year
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models ☆511 · Updated last year
- Official PyTorch implementation of "Visual Style Prompting with Swapping Self-Attention" ☆455 · Updated 11 months ago
- ☆793 · Updated 7 months ago
- Official code for "Style Aligned Image Generation via Shared Attention" ☆1,285 · Updated last year
- [ECCV 2024] HiDiffusion: Increases the resolution and speed of your diffusion model by adding only a single line of code! ☆824 · Updated 6 months ago
- Official code for MotionCtrl [SIGGRAPH 2024] ☆1,436 · Updated 4 months ago
- Code and data for "AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks" [TMLR 2024] ☆593 · Updated 7 months ago
- ☆727 · Updated last year
- [ECCV 2024] OMG: Occlusion-friendly Personalized Multi-concept Generation In Diffusion Models ☆689 · Updated 11 months ago
- ☆404 · Updated last year
- ☆423 · Updated 9 months ago
- [CVPR 2024] X-Adapter: Adding Universal Compatibility of Plugins for Upgraded Diffusion Model ☆765 · Updated 10 months ago
- Improved AnimateAnyone implementation that lets you use a pose image sequence and a reference image to generate stylized video ☆547 · Updated last year
- ☆454 · Updated last year
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model ☆748 · Updated 6 months ago
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆493 · Updated 11 months ago
- [ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction ☆939 · Updated 7 months ago
- MotionDirector training for AnimateDiff: train a MotionLoRA and run it on any compatible AnimateDiff UI ☆301 · Updated 10 months ago
- Concept Sliders for Precise Control of Diffusion Models ☆1,069 · Updated last month
- [NeurIPS 2024] Boosting the performance of consistency models with PCM! ☆478 · Updated 6 months ago
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ☆351 · Updated last year
- Put Your Face Everywhere in Seconds ☆312 · Updated last year
- ✨ Hotshot-XL: State-of-the-art AI text-to-GIF model trained to work alongside Stable Diffusion XL ☆1,102 · Updated last year
- [ICLR 2025] Official implementation of MotionClone: Training-Free Motion Cloning for Controllable Video Generation ☆493 · Updated 5 months ago
- Official implementation of Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model (ICLR … ☆443 · Updated 4 months ago
- Training-free Regional Prompting for Diffusion Transformers 🔥 ☆654 · Updated 6 months ago
- InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation 🔥 ☆1,932 · Updated 9 months ago
- InstantID-ROME: Improved Identity-Preserving Generation in Seconds 🔥 ☆227 · Updated last year