zliucz / animate-your-word
[ICCV'25 Best Paper Candidate] Official implementation of the paper "Dynamic Typography: Bringing Text to Life via Video Diffusion Prior"
☆342 · Updated last month
Alternatives and similar repositories for animate-your-word
Users interested in animate-your-word are comparing it to the repositories listed below.
- [ICLR 2025] Official implementation of MotionClone: Training-Free Motion Cloning for Controllable Video Generation ☆512 · Updated 6 months ago
- NeurIPS 2024 ☆393 · Updated last year
- Code for [CVPR 2024] VideoSwap: Customized Video Subject Swapping with Interactive Semantic Point Correspondence ☆401 · Updated last year
- Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model ☆240 · Updated 7 months ago
- Code and data for "AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks" [TMLR 2024] ☆640 · Updated last year
- [ICLR'25] MovieDreamer: Hierarchical Generation for Coherent Long Visual Sequences ☆318 · Updated last year
- [ICCV 2025] Light-A-Video: Training-free Video Relighting via Progressive Light Fusion ☆487 · Updated last month
- Official code for the CVPR 2025 paper "SemanticDraw: Towards Real-Time Interactive Content Creation from Image Diffusion Models" ☆583 · Updated 6 months ago
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models ☆538 · Updated last year
- [CVPR'25] Official implementation of the paper "AniDoc: Animation Creation Made Easier" ☆560 · Updated 8 months ago
- Official implementation of FIFO-Diffusion: Generating Infinite Videos from Text without Training (NeurIPS 2024) ☆480 · Updated last year
- ☆385 · Updated last year
- [AAAI 2025] DesignEdit: Unify Spatial-Aware Image Editing via Training-free Inpainting with a Multi-Layered Latent Diffusion Framework ☆362 · Updated last year
- ☆620 · Updated last year
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆503 · Updated last year
- Let's finetune video generation models! ☆528 · Updated 3 months ago
- ☆268 · Updated last year
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ☆352 · Updated 2 years ago
- Official implementation of the OneDiffusion paper (CVPR 2025) ☆658 · Updated last year
- SCEPTER is an open-source framework for training, fine-tuning, and inference with generative models. ☆547 · Updated 8 months ago
- Official implementation of the CVPR 2024 paper "FreeControl: Training-Free Spatial Control of Any Text-to-Image Diffusion Model with Any Con…" ☆473 · Updated last year
- [SIGGRAPH Asia 2024 & IJCV 2025] Official implementation of "Follow-Your-Emoji: Fine-Controllable and…" ☆430 · Updated last month
- ☆290 · Updated last year
- [CVPR 2025] Consistent and Controllable Image Animation with Motion Diffusion Models ☆293 · Updated 7 months ago
- ☆466 · Updated last year
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling ☆185 · Updated last year
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model ☆759 · Updated last year
- ☆277 · Updated last year
- CSGO: Content-Style Composition in Text-to-Image Generation 🔥 ☆381 · Updated last year
- ☆360 · Updated last year