Lizw14 / Super-CLEVR
Code for paper "Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning"
☆46 · Updated 2 years ago
Alternatives and similar repositories for Super-CLEVR
Users interested in Super-CLEVR are comparing it to the libraries listed below.
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated 11 months ago
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆29 · Updated 2 months ago
- [ICLR 2023] CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding ☆45 · Updated 3 months ago
- ☆74 · Updated 9 months ago
- Code release for NeurIPS 2023 paper SlotDiffusion: Object-centric Learning with Diffusion Models ☆92 · Updated last year
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models. ☆129 · Updated 2 years ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated last year
- [ICCV 2025] Official code for Perspective-Aware Reasoning in Vision-Language Models via Mental Imagery Simulation ☆39 · Updated last week
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆70 · Updated 3 months ago
- Source code for the paper "Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models" ☆16 · Updated last month
- Program synthesis for 3D spatial reasoning ☆49 · Updated 3 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆31 · Updated last year
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos" published at CVPR 2024 ☆52 · Updated last year
- [CVPR 2023] Code for "3D Concept Learning and Reasoning from Multi-View Images" ☆80 · Updated last year
- Code and datasets for "What's "up" with vision-language models? Investigating their struggle with spatial reasoning". ☆61 · Updated last year
- ☆30 · Updated last year
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆140 · Updated last year
- ☆45 · Updated last year
- ☆31 · Updated 2 weeks ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆77 · Updated last month
- [NeurIPS 2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" ☆104 · Updated last year
- ☆37 · Updated 2 weeks ago
- ☆84 · Updated 2 months ago
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) ☆44 · Updated 5 months ago
- [NeurIPS 2024] Official Repository of Multi-Object Hallucination in Vision-Language Models ☆31 · Updated 10 months ago
- ☆75 · Updated 3 months ago
- Code for Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense? [COLM 2024] ☆22 · Updated last year
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation… ☆37 · Updated 7 months ago
- ☆45 · Updated 8 months ago
- [ICLR 2025 Oral] Official Implementation for "Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Un… ☆16 · Updated 11 months ago