dangeng / motion_guidance
Code for ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators"
☆103 · Updated last year
Alternatives and similar repositories for motion_guidance
Users interested in motion_guidance are comparing it to the repositories listed below.
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆87 · Updated 11 months ago
- Directed Diffusion: Direct Control of Object Placement through Attention Guidance (AAAI 2024) ☆79 · Updated last year
- ☆101 · Updated 10 months ago
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆104 · Updated last year
- [NeurIPS 2024] Official Implementation of Attention Interpolation of Text-to-Image Diffusion ☆103 · Updated 8 months ago
- [ACM MM24] MotionMaster: Training-free Camera Motion Transfer For Video Generation ☆93 · Updated 9 months ago
- [AAAI-2025] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆93 · Updated last year
- Official implementation for "LOVECon: Text-driven Training-free Long Video Editing with ControlNet" ☆41 · Updated last year
- [NeurIPS 2024 D&B Track] Implementation for "FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models" ☆70 · Updated 7 months ago
- [SIGGRAPH Asia 2024] TrailBlazer: Trajectory Control for Diffusion-Based Video Generation ☆100 · Updated last year
- [CVPRW 2025] Progressive Autoregressive Video Diffusion Models: https://arxiv.org/abs/2410.08151 ☆79 · Updated 2 months ago
- This repository contains the code for the NeurIPS 2024 paper SF-V: Single Forward Video Generation Model. ☆97 · Updated 8 months ago
- PyTorch implementation of "SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation" (CVPR 2024) ☆124 · Updated last year
- Official PyTorch implementation - Video Motion Transfer with Diffusion Transformers ☆68 · Updated this week
- Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation ☆38 · Updated last year
- ☆28 · Updated 4 months ago
- The project for 'Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation' ☆42 · Updated 4 months ago
- [CVPR 2024] Official code for Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation ☆87 · Updated last year
- 🏞️ Official implementation of "Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition" ☆107 · Updated last year
- [NeurIPS 2024 Spotlight] The official implementation of the research paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆136 · Updated 9 months ago
- [ICLR 2024] Code for FreeNoise based on LaVie ☆34 · Updated last year
- ☆96 · Updated 3 months ago
- ☆64 · Updated 2 years ago
- MotionShop: Zero-Shot Motion Transfer in Video Diffusion Models with Mixture of Score Guidance ☆26 · Updated 7 months ago
- ☆64 · Updated last year
- We propose to generate a series of geometric shapes with target colors to disentangle (or peel off) the target colors from the shapes. B… ☆64 · Updated 9 months ago
- Subjects200K dataset ☆114 · Updated 6 months ago
- This is an official repository for the paper, NoiseCollage, which is a revolutionary extension of text-to-image diffusion models for layo… ☆57 · Updated last year
- ☆84 · Updated last year
- ☆67 · Updated last year