sam-motamed / Video-Editing-X-Attention
☆14 · Updated 10 months ago
Alternatives and similar repositories for Video-Editing-X-Attention:
Users interested in Video-Editing-X-Attention are comparing it to the repositories listed below.
- arXiv paper "Progressive Autoregressive Video Diffusion Models": https://arxiv.org/abs/2410.08151 · ☆61 · Updated 4 months ago
- The official repository of "Spectral Motion Alignment for Video Motion Transfer using Diffusion Models" · ☆24 · Updated 2 months ago
- Maximize the Resolution Potential of Pre-trained Rectified Flow Transformers · ☆49 · Updated 4 months ago
- LoRA-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models · ☆51 · Updated 6 months ago
- Official implementation for "LOVECon: Text-driven Training-free Long Video Editing with ControlNet" · ☆39 · Updated last year
- Official implementation of "Divide & Bind Your Attention for Improved Generative Semantic Nursing" (BMVC 2023 Oral) · ☆35 · Updated last year
- Official repository for "VideoGuide: Improving Video Diffusion Models without Training Through a Teacher's Guide" · ☆21 · Updated 2 weeks ago
- [CVPR 2024] InitNO: Boosting Text-to-Image Diffusion Models via Initial Noise Optimization · ☆46 · Updated 8 months ago
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models · ☆86 · Updated 5 months ago
- Official implementation of "A Noise is Worth Diffusion Guidance"; code and weights will be available soon · ☆38 · Updated 2 months ago
- [ECCV 2024] Source Prompt Disentangled Inversion for Boosting Image Editability with Diffusion Models · ☆41 · Updated 7 months ago
- ☆23 · Updated last year
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" · ☆95 · Updated last year
- Streaming Video Diffusion: Online Video Editing with Diffusion Models · ☆16 · Updated 8 months ago
- Directed Diffusion: Direct Control of Object Placement through Attention Guidance (AAAI 2024) · ☆77 · Updated 11 months ago
- ☆30 · Updated last year
- Official implementation of the paper "MotionCrafter: One-Shot Motion Customization of Diffusion Models" · ☆26 · Updated last year
- ☆81 · Updated 4 months ago
- We propose to generate a series of geometric shapes with target colors to disentangle (or peel off) the target colors from the shapes. B… · ☆55 · Updated 4 months ago
- ☆16 · Updated last year
- The official implementation for Detector Guidance for Multi-Object Text-to-Image Generation (DG) · ☆18 · Updated last year
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation · ☆58 · Updated 3 months ago
- ☆10 · Updated 4 months ago
- Eye-for-an-eye: Appearance Transfer with Semantic Correspondence in Diffusion Models · ☆25 · Updated 5 months ago
- [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization · ☆93 · Updated 10 months ago
- Official PyTorch implementation of "Video Motion Transfer with Diffusion Transformers" · ☆35 · Updated 2 months ago
- [arXiv 2024] I4VGen: Image as Free Stepping Stone for Text-to-Video Generation · ☆21 · Updated 4 months ago
- Official source code of "TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation" (ICLR 2025) · ☆33 · Updated 3 weeks ago
- The official repository for the paper NoiseCollage, a revolutionary extension of text-to-image diffusion models for layo… · ☆49 · Updated 9 months ago