zyxElsa / MotionCrafter
Official implementation of the paper "MotionCrafter: One-Shot Motion Customization of Diffusion Models"
☆25

Related projects
Alternatives and complementary repositories for MotionCrafter
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation (☆51)
- [NeurIPS 2024 Spotlight] Official implementation of the paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" (☆110)
- Directed Diffusion: Direct Control of Object Placement through Attention Guidance (AAAI 2024) (☆76)
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation (☆89)
- [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization (☆84)
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" (☆91)
- [CVPR 2024] Official implementation of the paper "Relation Rectification in Diffusion Model" (☆44)
- Official repo for the paper "OmniEdit: Building Image Editing Generalist Models Through Specialist Supervision" (☆31)
- We propose to generate a series of geometric shapes with target colors to disentangle (or peel off) the target colors from the shapes. B… (☆50)
- [NeurIPS 2024] EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models (☆40)
- EVA: Zero-shot Accurate Attributes and Multi-Object Video Editing (☆27)
- Official implementation of DragVideo (☆43)
- Interactive Video Generation via Masked-Diffusion (☆70)
- Official GitHub repository for the Text-Guided Video Editing (TGVE) competition of the LOVEU Workshop @ CVPR'23 (☆73)
- Official implementation of "LOVECon: Text-driven Training-free Long Video Editing with ControlNet" (☆37)
- Official source code for "TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation" (☆25)
- T2VScore: Towards A Better Metric for Text-to-Video Generation (☆78)
- Official PyTorch implementation of SingleInsert (☆26)
- Official PyTorch implementation of "λ-ECLIPSE: Multi-Concept Personalized Text-to-Image Diffusion Models by Leveraging CLIP Latent Space" (☆44)
- Official implementation of "Make It Count: Text-to-Image Generation with an Accurate Number of Objects" (☆61)
- [ECCV'24] MaxFusion: Plug & Play multimodal generation in text-to-image diffusion models (☆18)
- Code for the paper "Compositional Text-to-Image Synthesis with Attention Map Control of Diffusion Models" (☆39)
- [CVPR 2024] InitNO: Boosting Text-to-Image Diffusion Models via Initial Noise Optimization (☆31)
- Streaming Video Diffusion: Online Video Editing with Diffusion Models (☆16)
- [WACV 2024] Official PyTorch implementation of Shape-Guided Diffusion with Inside-Outside Attention (☆37)