MyNiuuu / MOFA-Video
[ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model.
☆740 · Updated 5 months ago
Alternatives and similar repositories for MOFA-Video:
Users interested in MOFA-Video are comparing it to the repositories listed below.
- 📹 A more flexible framework that generates videos at any resolution and creates videos from images. ☆957 · Updated this week
- [ICLR 2025] Official implementation of MotionClone: Training-Free Motion Cloning for Controllable Video Generation ☆483 · Updated 4 months ago
- [SIGGRAPH Asia 2024] Follow-Your-Emoji: This repo is the official implementation of "Follow-Your-Emoji: Fine-Controllable and Expressive … ☆390 · Updated 2 weeks ago
- ☆420 · Updated 7 months ago
- [ICML 2024] MagicPose (also known as MagicDance): Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion ☆752 · Updated 10 months ago
- StoryMaker: Towards consistent characters in text-to-image generation ☆689 · Updated 5 months ago
- MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion ☆215 · Updated 2 weeks ago
- NeurIPS 2024 ☆380 · Updated 7 months ago
- Code and data for "AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks" [TMLR 2024] ☆586 · Updated 6 months ago
- [ECCV 2024] OMG: Occlusion-friendly Personalized Multi-concept Generation In Diffusion Models ☆686 · Updated 10 months ago
- ☆517 · Updated 3 months ago
- Official PyTorch implementation of StreamV2V. ☆488 · Updated 2 months ago
- Light-A-Video: Training-free Video Relighting via Progressive Light Fusion ☆412 · Updated last week
- ☆374 · Updated 11 months ago
- Controllable video and image generation: SVD, Animate Anyone, ControlNet, ControlNeXt, LoRA ☆1,566 · Updated 7 months ago
- ☆681 · Updated 5 months ago
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆489 · Updated 10 months ago
- Official implementation of FIFO-Diffusion: Generating Infinite Videos from Text without Training (NeurIPS 2024) ☆456 · Updated 6 months ago
- AutoStudio: Crafting Consistent Subjects in Multi-turn Interactive Image Generation ☆439 · Updated 3 weeks ago
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆366 · Updated 3 months ago
- ☆536 · Updated 2 weeks ago
- Code for [CVPR 2024] VideoSwap: Customized Video Subject Swapping with Interactive Semantic Point Correspondence ☆389 · Updated 5 months ago
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆492 · Updated last week
- [ICLR 2025] Animate-X: Universal Character Image Animation with Enhanced Motion Representation ☆285 · Updated 2 months ago
- [ICLR 2025] Animate-X - PyTorch Implementation ☆303 · Updated 3 months ago
- [CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation ☆757 · Updated 11 months ago
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models ☆512 · Updated last year
- SCEPTER is an open-source framework for training, fine-tuning, and inference with generative models. ☆513 · Updated last month
- Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model ☆238 · Updated 9 months ago
- Official code for VEnhancer: Generative Space-Time Enhancement for Video Generation ☆534 · Updated 7 months ago