McGill-NLP / AURORA
Code and data for the paper: Learning Action and Reasoning-Centric Image Editing from Videos and Simulation
☆28 · Updated 3 months ago
Alternatives and similar repositories for AURORA:
Users interested in AURORA are comparing it to the repositories listed below.
- VPEval: Codebase from Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆44 · Updated last year
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆56 · Updated last year
- ☆48 · Updated last year
- ☆22 · Updated 10 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆55 · Updated 2 months ago
- [NeurIPS 2024] EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models ☆47 · Updated 6 months ago
- Official repository of Personalized Visual Instruct Tuning ☆28 · Updated 2 months ago
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆25 · Updated 7 months ago
- A curated list of papers and resources for text-to-image evaluation ☆29 · Updated last year
- ☆57 · Updated last year
- ☆31 · Updated 3 months ago
- ☆13 · Updated 8 months ago
- Code and data for the paper "SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data" ☆34 · Updated last year
- 👆 PyTorch implementation of "Ctrl-V: Higher Fidelity Video Generation with Bounding-Box Controlled Object Motion" ☆26 · Updated 6 months ago
- [NeurIPS 2024] Official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect…" ☆35 · Updated 10 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆43 · Updated 3 months ago
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆35 · Updated 8 months ago
- Codebase for the paper "Elucidating the Design Space of Language Models for Image Generation" ☆45 · Updated 5 months ago
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆124 · Updated 10 months ago
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr…" ☆75 · Updated 4 months ago
- PhysGame: Benchmark for Physical Commonsense Evaluation in Gameplay Videos ☆43 · Updated 3 months ago
- T2VScore: Towards A Better Metric for Text-to-Video Generation ☆79 · Updated last year
- HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation ☆57 · Updated 2 months ago
- 🦾 EvalGIM (pronounced "EvalGym"): an evaluation library for generative image models. It enables easy-to-use, reproducible automatic… ☆73 · Updated 4 months ago
- VEGGIE: Instructional Editing and Reasoning of Video Concepts with Grounded Generation ☆21 · Updated last month
- Code for "VideoRepair: Improving Text-to-Video Generation via Misalignment Evaluation and Localized Refinement" ☆46 · Updated 5 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆69 · Updated 2 months ago
- Implementation of CounterCurate, the data curation pipeline for both physical and semantic counterfactual image-caption pairs ☆18 · Updated 10 months ago
- ☆30 · Updated 9 months ago
- Language Repository for Long Video Understanding ☆31 · Updated 10 months ago