McGill-NLP / AURORA
Code and data for the paper: Learning Action and Reasoning-Centric Image Editing from Videos and Simulation
☆ 24 · Updated 2 months ago
Alternatives and similar repositories for AURORA:
Users interested in AURORA are comparing it to the repositories listed below.
- Official implementation of the paper The Hidden Language of Diffusion Models · ☆ 72 · Updated last year
- Code and Data for Paper: SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data · ☆ 33 · Updated last year
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) · ☆ 56 · Updated last year
- VPEval Codebase from Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) · ☆ 44 · Updated last year
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" · ☆ 35 · Updated 7 months ago
- Official repo for the TMLR paper "Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners" · ☆ 28 · Updated 10 months ago
- ☆ 23 · Updated 5 months ago
- ☆ 31 · Updated last year
- ☆ 48 · Updated last year
- A curated list of papers and resources for text-to-image evaluation · ☆ 28 · Updated last year
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters" · ☆ 44 · Updated 2 months ago
- Visual Instruction-guided Explainable Metric. Code for "Towards Explainable Metrics for Conditional Image Synthesis Evaluation" (ACL 2024… · ☆ 34 · Updated 4 months ago
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" · ☆ 25 · Updated 6 months ago
- Official Repository of Personalized Visual Instruct Tuning · ☆ 28 · Updated 2 weeks ago
- ☆ 21 · Updated 9 months ago
- ☆ 49 · Updated last month
- [NeurIPS 2024] EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models · ☆ 47 · Updated 5 months ago
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model · ☆ 42 · Updated 7 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… · ☆ 115 · Updated 8 months ago
- Training code for CLIP-FlanT5 · ☆ 26 · Updated 7 months ago
- ☆ 63 · Updated last month
- ☆ 57 · Updated 11 months ago
- ☆ 47 · Updated last year
- Implementation of CounterCurate, a data curation pipeline for both physical and semantic counterfactual image-caption pairs · ☆ 18 · Updated 8 months ago
- 👆 PyTorch implementation of "Ctrl-V: Higher Fidelity Video Generation with Bounding-Box Controlled Object Motion" · ☆ 25 · Updated 5 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding · ☆ 33 · Updated last week
- A big_vision-inspired repo that implements a generic Auto-Encoder class capable of representation learning and generative modeling · ☆ 34 · Updated 8 months ago
- ORES: Open-vocabulary Responsible Visual Synthesis · ☆ 13 · Updated last year
- Codebase for the paper "Elucidating the Design Space of Language Models for Image Generation" · ☆ 45 · Updated 4 months ago
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor Vision-Language models · ☆ 18 · Updated 2 months ago