wutong16 / FiVA
[NeurIPS 2024 D&B Track] Implementation for "FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models"
☆70 · Updated 8 months ago
Alternatives and similar repositories for FiVA
Users interested in FiVA are comparing it to the repositories listed below
- [CVPR'25 - Rating 555] Official PyTorch implementation of Lumos: Learning Visual Generative Priors without Text ☆52 · Updated 5 months ago
- Code for ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆106 · Updated last year
- ☆33 · Updated 10 months ago
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆106 · Updated last week
- 🏞️ Official implementation of "Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition" ☆108 · Updated last year
- [NeurIPS 2023] Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator ☆94 · Updated last year
- Implementation code for the paper "MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing" ☆68 · Updated 2 months ago
- [arXiv 2024] I4VGen: Image as Free Stepping Stone for Text-to-Video Generation ☆24 · Updated 11 months ago
- ☆29 · Updated 5 months ago
- [NeurIPS 2024] Video Diffusion Models are Training-free Motion Interpreter and Controller ☆46 · Updated last month
- Official PyTorch implementation of Video Motion Transfer with Diffusion Transformers ☆71 · Updated last month
- T2VScore: Towards A Better Metric for Text-to-Video Generation ☆80 · Updated last year
- ☆26 · Updated 6 months ago
- [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization ☆105 · Updated last year
- [CVPRW 2025] Progressive Autoregressive Video Diffusion Models (https://arxiv.org/abs/2410.08151) ☆83 · Updated 4 months ago
- Project for 'Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation' ☆44 · Updated 5 months ago
- Official source code of "TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation" (ICLR 2025) ☆58 · Updated 7 months ago
- [CVPR 2024] BIVDiff: A Training-free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models ☆75 · Updated last year
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆87 · Updated last year
- [ICML 2025] EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM ☆65 · Updated last month
- ☆46 · Updated 11 months ago
- Reflect-DiT: Inference-Time Scaling for Text-to-Image Diffusion Transformers via In-Context Reflection ☆45 · Updated 3 weeks ago
- Implementation of the paper "EditCLIP: Representation Learning for Image Editing" (ICCV 2025) ☆26 · Updated 2 months ago
- [AAAI'25] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆95 · Updated last year
- Official implementation of "Make It Count: Text-to-Image Generation with an Accurate Number of Objects" (CVPR 2025) ☆83 · Updated 6 months ago
- ☆30 · Updated last year
- ShotBench: Expert-Level Cinematic Understanding in Vision-Language Models ☆75 · Updated last month
- [arXiv 2025] ByteMorph: Benchmarking Instruction-Guided Image Editing with Non-Rigid Motions ☆37 · Updated 3 months ago
- ☆45 · Updated 4 months ago
- [NeurIPS 2024] Official Implementation of Attention Interpolation of Text-to-Image Diffusion ☆105 · Updated 9 months ago