amitakamath / whatsup_vlms
Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning".
☆66 · Updated last year
Alternatives and similar repositories for whatsup_vlms
Users interested in whatsup_vlms are comparing it to the repositories listed below.
- GitHub repository for "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas" (ICML 2025) ☆61 · Updated 7 months ago
- [ICLR '25] Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" ☆92 · Updated this week
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆93 · Updated last year
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆81 · Updated last month
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆89 · Updated last year
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆88 · Updated last year
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆150 · Updated 2 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆70 · Updated last year
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024) ☆34 · Updated last year
- ☆74 · Updated last year
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models ☆133 · Updated 2 years ago
- Official repository for the A-OKVQA dataset ☆104 · Updated last year
- Official PyTorch implementation of "Interpreting the Second-Order Effects of Neurons in CLIP" ☆42 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated 5 months ago
- [ICCV 2023 Oral] Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities ☆43 · Updated 5 months ago
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆66 · Updated 2 months ago
- [NeurIPS 2024] Official repository of Multi-Object Hallucination in Vision-Language Models ☆33 · Updated last year
- ☆108 · Updated 8 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆154 · Updated last year
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆99 · Updated 3 months ago
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆41 · Updated last year
- FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models ☆31 · Updated last week
- Code and data setup for the paper "Are Diffusion Models Vision-and-Language Reasoners?" ☆33 · Updated last year
- NegCLIP ☆38 · Updated 2 years ago
- VisualGPTScore for visio-linguistic reasoning ☆27 · Updated 2 years ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆71 · Updated last year
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆38 · Updated last month
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆35 · Updated 2 years ago
- Awesome Vision-Language Compositionality: a comprehensive curation of research papers in the literature ☆32 · Updated 9 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆98 · Updated last year