Lizw14 / Super-CLEVR
Code for the paper "Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning"
☆44 · Updated 2 years ago
Alternatives and similar repositories for Super-CLEVR
Users interested in Super-CLEVR are comparing it to the repositories listed below.
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated last year
- [ICLR 2023] CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding ☆46 · Updated 6 months ago
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆29 · Updated 5 months ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆20 · Updated last year
- Program synthesis for 3D spatial reasoning ☆53 · Updated 6 months ago
- ☆65 · Updated last month
- Code and data setup for the paper "Are Diffusion Models Vision-and-language Reasoners?" ☆33 · Updated last year
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models. ☆136 · Updated 2 years ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆152 · Updated 2 months ago
- ☆42 · Updated 3 months ago
- ☆111 · Updated 4 months ago
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆78 · Updated last year
- ☆79 · Updated 5 months ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆80 · Updated 6 months ago
- GitHub repository for "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas" (ICML 2025) ☆65 · Updated 7 months ago
- Code for Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense? [COLM 2024] ☆24 · Updated last year
- Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (ECCV 2024) ☆32 · Updated last year
- ☆30 · Updated last year
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos" published at CVPR 2024 ☆52 · Updated last year
- Source code for the paper "Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models" ☆17 · Updated last week
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning". ☆67 · Updated last year
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆28 · Updated last year
- Egocentric Video Understanding Dataset (EVUD) ☆32 · Updated last year
- Code release for NeurIPS 2023 paper SlotDiffusion: Object-centric Learning with Diffusion Models ☆94 · Updated last year
- [NeurIPS 2024] Official Repository of Multi-Object Hallucination in Vision-Language Models ☆33 · Updated last year
- ☆39 · Updated 3 months ago
- [ICLR 2025 Oral] Official Implementation for "Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Un… ☆18 · Updated last year
- ☆95 · Updated 5 months ago
- [arXiv: 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆94 · Updated 9 months ago
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ☆37 · Updated last year