UCSC-VLAA / Complex-Edit
Complex-Edit: CoT-Like Instruction Generation for Complexity-Controllable Image Editing Benchmark
☆26 · Updated 7 months ago
Alternatives and similar repositories for Complex-Edit
Users interested in Complex-Edit are comparing it to the repositories listed below.
- Implementation code of the paper MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing · ☆70 · Updated 4 months ago
- [NeurIPS 2024] EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models · ☆50 · Updated last year
- Video Diffusion Transformers are In-Context Learners · ☆34 · Updated 10 months ago
- Official implementation of MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-Image Synthesis · ☆86 · Updated last year
- Official code for CustAny: Customizing Anything from A Single Example. Accepted by CVPR 2025 (Oral) · ☆48 · Updated 7 months ago
- Benchmark dataset and code of MSRVTT-Personalization · ☆51 · Updated 2 weeks ago
- Vico: Compositional Video Generation as Flow Equalization · ☆58 · Updated last year
- 🏞️ Official implementation of "Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition" · ☆109 · Updated this week
- CutDiffusion: A Simple, Fast, Cheap, and Strong Diffusion Extrapolation Method · ☆27 · Updated last month
- [ICLR 2025] HQ-Edit: A High-Quality and High-Coverage Dataset for General Image Editing · ☆111 · Updated last year
- LoRA-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models · ☆66 · Updated last year
- [CVPR 2025] PatchDPO: Patch-level DPO for Finetuning-free Personalized Image Generation · ☆42 · Updated 4 months ago
- Reflect-DiT: Inference-Time Scaling for Text-to-Image Diffusion Transformers via In-Context Reflection · ☆50 · Updated 3 months ago
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation · ☆108 · Updated 2 months ago
- T2VScore: Towards A Better Metric for Text-to-Video Generation · ☆80 · Updated last year
- ☆31 · Updated last year
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation · ☆69 · Updated last year
- ☆24 · Updated 2 years ago
- [NeurIPS 2024 D&B Track] Implementation for "FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models" · ☆73 · Updated 11 months ago
- [ICCV 2023 Oral, Best Paper Finalist] ITI-GEN: Inclusive Text-to-Image Generation · ☆69 · Updated last year
- ☆32 · Updated 8 months ago
- An innovative method designed to augment the capabilities of existing video diffusion models · ☆22 · Updated last year
- [arXiv 2024] I4VGen: Image as Free Stepping Stone for Text-to-Video Generation · ☆24 · Updated last year
- Official source code of "TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation" (ICLR 2025) · ☆56 · Updated 10 months ago
- Subjects200K dataset · ☆123 · Updated 10 months ago
- [NeurIPS 2025] Official code for Inference-Time Scaling for Flow Models via Stochastic Generation and Rollover Budget Forcing · ☆69 · Updated last month
- Code for the paper "Redefining Temporal Modeling in Video Diffusion: The Vectorized Timestep Approach" · ☆34 · Updated last year
- [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization · ☆107 · Updated last year
- Official implementation of the paper "MotionCrafter: One-Shot Motion Customization of Diffusion Models" · ☆28 · Updated last year
- ☆51 · Updated 11 months ago