om-ai-lab / VL-CheckList
Evaluating Vision & Language Pretraining Models with Objects, Attributes and Relations. [EMNLP 2022]
☆129 · Updated 6 months ago
Alternatives and similar repositories for VL-CheckList:
Users interested in VL-CheckList are comparing it to the libraries listed below.
- FACTUAL benchmark dataset and a pre-trained textual scene graph parser trained on FACTUAL. ☆105 · Updated last week
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆137 · Updated 4 months ago
- [CVPR 2023] Official implementation of the paper: Fine-grained Audible Video Description ☆72 · Updated last year
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ☆107 · Updated 3 weeks ago
- 🚀 [NeurIPS 2024] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark ☆80 · Updated 2 weeks ago
- [NAACL 2025 Oral] 🎉 From redundancy to relevance: Enhancing explainability in multimodal large language models ☆94 · Updated 2 months ago
- GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition? ☆186 · Updated 10 months ago
- [AAAI 2023] Zero-Shot Enhancement of CLIP with Parameter-free Attention ☆88 · Updated last year
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-language Models ☆93 · Updated last year
- ☆28 · Updated 5 months ago
- 【CVPR'2023 Highlight & TPAMI】 Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? ☆236 · Updated 4 months ago
- ☆144 · Updated 5 months ago
- ☆66 · Updated 6 years ago
- Source code for EMNLP 2022 paper “PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models” ☆48 · Updated 2 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆136 · Updated 2 years ago
- ☆28 · Updated last month
- Learning Semantic Relationship among Instances for Image-Text Matching, CVPR, 2023 ☆88 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated last year
- (ICML 2024) Improve Context Understanding in Multimodal Large Language Models via Multimodal Composition Learning ☆28 · Updated 6 months ago
- [CVPR 2023] The code for 《Position-guided Text Prompt for Vision-Language Pre-training》 ☆152 · Updated last year
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆205 · Updated 2 years ago
- Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment, CVPR, 2024 ☆89 · Updated 10 months ago
- SVIT: Scaling up Visual Instruction Tuning ☆163 · Updated 9 months ago
- Dataset pruning for ImageNet and LAION-2B. ☆77 · Updated 9 months ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆46 · Updated last year
- ☆63 · Updated last year
- u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model ☆131 · Updated last week
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache ☆42 · Updated 8 months ago
- ☆91 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated last year