haofengl / DragNoise
[CVPR 2024] Official code for Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation
☆87 · Updated last year
Alternatives and similar repositories for DragNoise
Users interested in DragNoise are comparing it to the repositories listed below.
- Pytorch Implementation of "SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation" (CVPR 2024) ☆124 · Updated last year
- ☆101 · Updated 10 months ago
- This repo contains the code for the PreciseControl project [ECCV'24] ☆64 · Updated 9 months ago
- [CVPR'25] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆129 · Updated 2 weeks ago
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆119 · Updated last month
- ☆103 · Updated last year
- Subjects200K dataset ☆114 · Updated 6 months ago
- [AAAI 2025] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆93 · Updated last year
- [CVPR 2024] Official PyTorch implementation of FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition ☆161 · Updated 6 months ago
- [arXiv 2024] Edicho: Consistent Image Editing in the Wild ☆118 · Updated 6 months ago
- Official code of "Edit Transfer: Learning Image Editing via Vision In-Context Relations" ☆80 · Updated last month
- MuDI: Identity Decoupling for Multi-Subject Personalization of Text-to-Image Models (NeurIPS 2024) ☆95 · Updated 6 months ago
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆87 · Updated 11 months ago
- Implementation code of the paper "MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing" ☆65 · Updated 3 weeks ago
- [arXiv 2025] BlobCtrl: A Unified and Flexible Framework for Element-level Image Generation and Editing ☆11 · Updated 4 months ago
- ☆83 · Updated 3 weeks ago
- [ACM MM 2024] MotionMaster: Training-free Camera Motion Transfer for Video Generation ☆93 · Updated 9 months ago
- [ECCV 2024] AnyControl, a multi-control image synthesis model that supports any combination of user-provided control signals ☆127 · Updated last year
- [CVPR 2024] Official codebase for ZONE: Zero-shot InstructiON-guided Local Editing ☆78 · Updated 8 months ago
- ☆29 · Updated last year
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆103 · Updated last year
- InstantStyle-Plus: Style Transfer with Content-Preserving in Text-to-Image Generation 🔥 ☆126 · Updated last year
- The official implementation of the paper "StableV2V: Stablizing Shape Consistency in Video-to-Video Editing" ☆158 · Updated 7 months ago
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling ☆172 · Updated 10 months ago
- We propose to generate a series of geometric shapes with target colors to disentangle (or peel off) the target colors from the shapes. B… ☆64 · Updated 9 months ago
- [NeurIPS 2024 Spotlight] The official implementation of the research paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆136 · Updated 9 months ago
- ☆89 · Updated 10 months ago
- ☆50 · Updated 7 months ago
- MAG-Edit: Localized Image Editing in Complex Scenarios via Mask-Based Attention-Adjusted Guidance (ACM MM 2024) ☆133 · Updated 3 months ago
- [ECCV 2024] Source Prompt Disentangled Inversion for Boosting Image Editability with Diffusion Models ☆44 · Updated last year