River-Zhang / Awesome-FLUX-DiT
A collection of diffusion models based on FLUX/DiT for image/video generation, editing, reconstruction, inpainting, etc.
☆22 · Updated this week
Alternatives and similar repositories for Awesome-FLUX-DiT:
Users interested in Awesome-FLUX-DiT are comparing it to the repositories listed below.
- CAR: Controllable AutoRegressive Modeling for Visual Generation ☆107 · Updated 3 months ago
- [ICLR 2025] Trajectory Attention For Fine-grained Video Motion Control ☆52 · Updated 2 weeks ago
- [NeurIPS 2024] Video Diffusion Models are Training-free Motion Interpreter and Controller ☆34 · Updated 2 weeks ago
- Training-Free Condition-Guided Text-to-Video Generation ☆62 · Updated last year
- Official PyTorch implementation of Video Motion Transfer with Diffusion Transformers ☆36 · Updated 2 months ago
- T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation ☆65 · Updated this week
- Official implementation of ControlVAR ☆95 · Updated 2 months ago
- [ICLR 2024] Official PyTorch/Diffusers implementation of "Object-aware Inversion and Reassembly for Image Editing" ☆84 · Updated 6 months ago
- Official repo for "AnyEdit: Mastering Unified High-Quality Image Editing for Any Idea" ☆55 · Updated last week
- [ECCV 2024] Official repository for "RegionDrag: Fast Region-Based Image Editing with Diffusion Models" ☆45 · Updated 4 months ago
- [ICLR 2024] Code for "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆95 · Updated last year
- [CVPR 2024] Official PyTorch implementation of FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition ☆139 · Updated last month
- [NeurIPS 2023] Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator ☆94 · Updated 11 months ago
- [ICLR 2025] ControlAR: Controllable Image Generation with Autoregressive Models ☆197 · Updated last month
- [ICLR 2025] ☆137 · Updated last month
- [CVPR 2024] Official codebase for ZONE: Zero-shot InstructiON-guided Local Editing ☆72 · Updated 3 months ago
- [CVPR 2024] BIVDiff: A Training-free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models ☆68 · Updated 5 months ago
- Official repository of DreamMover ☆29 · Updated 5 months ago
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation ☆59 · Updated 4 months ago
- [ECCV 2024] Official implementation of "Improving Text-guided Object Inpainting with Semantic Pre-inpainting" ☆48 · Updated 2 months ago
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆86 · Updated 6 months ago
- MC²: Multi-concept Guidance for Customized Multi-concept Generation ☆23 · Updated 11 months ago
- CCEdit: Creative and Controllable Video Editing via Diffusion Models ☆106 · Updated 8 months ago
- [CVPR 2024] LeftRefill: Filling Right Canvas based on Left Reference through Generalized Text-to-Image Diffusion Model ☆71 · Updated 7 months ago
- [ECCV 2024] Source Prompt Disentangled Inversion for Boosting Image Editability with Diffusion Models ☆41 · Updated 7 months ago
- Official GitHub repository for the Text-Guided Video Editing (TGVE) competition of the LOVEU Workshop @ CVPR'23 ☆75 · Updated last year