McGill-NLP / AURORA
Code and data for the paper: Learning Action and Reasoning-Centric Image Editing from Videos and Simulation
☆33 · Updated 6 months ago
Alternatives and similar repositories for AURORA
Users interested in AURORA are comparing it to the repositories listed below.
- VPEval Codebase from Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) · ☆45 · Updated 2 years ago
- T2I-Copilot: A Training-Free Multi-Agent Text-to-Image System for Enhanced Prompt Interpretation and Interactive Generation (ICCV'25) · ☆39 · Updated 3 months ago
- [NeurIPS 2024] EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models · ☆50 · Updated last year
- Visual Instruction-guided Explainable Metric. Code for "Towards Explainable Metrics for Conditional Image Synthesis Evaluation" (ACL 2024… · ☆60 · Updated last year
- ☆58 · Updated last year
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) · ☆57 · Updated 2 years ago
- ☆80 · Updated 6 months ago
- ☆52 · Updated 2 years ago
- Official repo for StableLLAVA · ☆95 · Updated 2 years ago
- Code and data for the paper "SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data" · ☆35 · Updated last year
- ☆26 · Updated last year
- Training code for CLIP-FlanT5 · ☆30 · Updated last year
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr… · ☆79 · Updated last year
- T2VScore: Towards A Better Metric for Text-to-Video Generation · ☆80 · Updated last year
- A curated list of papers and resources for text-to-image evaluation · ☆30 · Updated 2 years ago
- ☆41 · Updated last year
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model · ☆42 · Updated last year
- Code for "VideoRepair: Improving Text-to-Video Generation via Misalignment Evaluation and Localized Refinement" · ☆51 · Updated last year
- Official repository for LLaVA-Reward (ICCV 2025): Multimodal LLMs as Customized Reward Models for Text-to-Image Generation · ☆22 · Updated 5 months ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences · ☆40 · Updated 9 months ago
- [NeurIPS 2024 D&B Track] Official repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions" · ☆74 · Updated last year
- Implementation and dataset for the paper "Can MLLMs Perform Text-to-Image In-Context Learning?" · ☆42 · Updated 7 months ago
- Codebase for the paper "Elucidating the Design Space of Language Models for Image Generation" · ☆46 · Updated last year
- Code for "Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense?" [COLM 2024] · ☆24 · Updated last year
- Official implementation of the paper "The Hidden Language of Diffusion Models" · ☆77 · Updated last year
- [ECCV 2024] Parrot Captions Teach CLIP to Spot Text · ☆66 · Updated last year
- [ACL 2025 Findings] Benchmarking Multihop Multimodal Internet Agents · ☆47 · Updated 10 months ago
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… · ☆153 · Updated 3 months ago
- Official implementation of Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents (NeurIPS 2025) · ☆44 · Updated last month
- [CVPR 2024] Official PyTorch implementation of "ECLIPSE: Revisiting the Text-to-Image Prior for Efficient Image Generation" · ☆65 · Updated last year