om-ai-lab / VL-CheckList
Evaluating Vision & Language Pretraining Models with Objects, Attributes and Relations. [EMNLP 2022]
☆131 · Updated 8 months ago
Alternatives and similar repositories for VL-CheckList
Users interested in VL-CheckList are comparing it to the repositories listed below.
- [ACL 2023 Findings] FACTUAL dataset, the textual scene graph parser trained on FACTUAL. ☆112 · Updated this week
- [CVPR 2023] Official implementation of the paper: Fine-grained Audible Video Description. ☆73 · Updated last year
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models. ☆144 · Updated 6 months ago
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator. ☆110 · Updated 3 months ago
- 🚀 [NeurIPS24] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark (NeurIPS'24). ☆84 · Updated 2 months ago
- ☆29 · Updated 7 months ago
- 【CVPR'2023 Highlight & TPAMI】Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? ☆238 · Updated 6 months ago
- GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition? ☆187 · Updated last year
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models. ☆93 · Updated last year
- [NAACL 2025 Oral] 🎉 From redundancy to relevance: Enhancing explainability in multimodal large language models