cambridgeltl / visual-spatial-reasoning
[TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models (a minimal loading sketch follows the related-projects list below).
☆104 · Updated last year
Related projects
Alternatives and complementary repositories for visual-spatial-reasoning
- Official repository for the A-OKVQA dataset ☆63 · Updated 6 months ago
- ☆63 · Updated 5 years ago
- ☆25 · Updated this week
- Code for 'Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality', EMNLP 2022 ☆30 · Updated last year
- PyTorch code for "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners" ☆112 · Updated 2 years ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆71 · Updated 8 months ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆133 · Updated last year
- ☆121 · Updated last week
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 · Updated 2 years ago
- ☆68 · Updated last year
- [ICCV 2023 Oral] Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities ☆32 · Updated 2 months ago
- Official Code of IdealGPT ☆32 · Updated last year
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models", NeurIPS 2023 Spotlight ☆35 · Updated last year
- ☆32 · Updated last year
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆84 · Updated last year
- Code and datasets for "What's "up" with vision-language models? Investigating their struggle with spatial reasoning". ☆34 · Updated 8 months ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆178 · Updated 7 months ago
- [EMNLP 2023] InfoSeek: A New VQA Benchmark Focused on Visual Info-Seeking Questions ☆16 · Updated 5 months ago
- Code for "Multitask Vision-Language Prompt Tuning" https://arxiv.org/abs/2211.11720 ☆52 · Updated 5 months ago
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 2 years ago
- ☆64 · Updated 4 months ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆42 · Updated last year
- MMICL (PKU): a state-of-the-art VLM with in-context learning ability ☆40 · Updated last year
- ☆55 · Updated last year
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆107 · Updated 4 months ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆41 · Updated 4 months ago
- M-HalDetect Dataset Release ☆19 · Updated last year
- [NAACL 2024] Vision-language model that reduces hallucinations through self-feedback-guided revision. Visualizes attentions on image feat… ☆42 · Updated 2 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆72 · Updated 7 months ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆56 · Updated last year
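
Since VSR distributes its examples as (image, spatial caption, binary label) triples, probing a model against it reduces to a simple scoring loop. The sketch below is a minimal, hypothetical example: the Hugging Face Hub dataset name `cambridgeltl/vsr_random`, the split name, and the `caption`/`label` field names are assumptions taken from the VSR repo's data card and should be verified against the repository before use.

```python
# Minimal sketch of a VSR probing loop, NOT the repo's official eval code.
# Assumes the dataset is mirrored on the Hugging Face Hub as
# "cambridgeltl/vsr_random" with "caption" and "label" fields
# (per the repo's data card); verify names before relying on this.
from datasets import load_dataset

vsr = load_dataset("cambridgeltl/vsr_random", split="test")

correct = 0
for example in vsr:
    # Each example pairs an image with a spatial-relation caption
    # and a binary label (1 = the caption holds, 0 = it does not).
    prediction = 1  # placeholder: always predict "true"
    correct += int(prediction == example["label"])

print(f"accuracy: {correct / len(vsr):.3f}")
```

Because VSR's labels are roughly balanced, the always-true placeholder lands near 50% accuracy; replacing it with a vision-language model's true/false judgment on each (image, caption) pair turns the loop into an actual probe.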