om-ai-lab / VL-CheckList
Evaluating Vision & Language Pretraining Models with Objects, Attributes and Relations. [EMNLP 2022]
☆130 · Updated 8 months ago
Alternatives and similar repositories for VL-CheckList
Users interested in VL-CheckList are comparing it to the repositories listed below.
- FACTUAL benchmark dataset and the pre-trained textual scene graph parser trained on FACTUAL. ☆111 · Updated this week
- [CVPR 2023] Official implementation of the paper: Fine-grained Audible Video Description. ☆73 · Updated last year
- ☆29 · Updated 6 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models. ☆142 · Updated 5 months ago
- 🚀 [NeurIPS 2024] Make Vision Matter in Visual Question Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark. ☆84 · Updated last month
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator. ☆109 · Updated 2 months ago
- GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition? ☆187 · Updated last year
- [AAAI 2023] Zero-Shot Enhancement of CLIP with Parameter-free Attention. ☆88 · Updated 2 years ago
- [NAACL 2025 Oral] 🎉 From redundancy to relevance: Enhancing explainability in multimodal large language models. ☆95 · Updated 3 months ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models. ☆93 · Updated last year
- [CVPR 2023 Highlight & TPAMI] Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? ☆238 · Updated 6 months ago
- [CVPR 2024] Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment. ☆93 · Updated last month
- [CVPR 2023] Learning Semantic Relationship among Instances for Image-Text Matching. ☆90 · Updated last month
- Source code for the EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models". ☆48 · Updated 2 years ago
- Official PyTorch implementation of Clover: Towards a Unified Video-Language Alignment and Fusion Model (CVPR 2023). ☆40 · Updated 2 years ago
- ☆28 · Updated 2 months ago
- PyTorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners. ☆115 · Updated 2 years ago
- [CVPR 2023] Code for "Position-guided Text Prompt for Vision-Language Pre-training". ☆151 · Updated last year
- Toolkit for the ELEVATER benchmark. ☆72 · Updated last year
- ☆27 · Updated last year
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation). ☆87 · Updated last year
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training. ☆137 · Updated 2 years ago
- [AAAI 2023 & IJCV] Transferring Vision-Language Models for Visual Recognition: A Classifier Perspective. ☆192 · Updated last year
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models" (NeurIPS 2023 Spotlight). ☆37 · Updated 2 years ago
- Official repository for the A-OKVQA dataset. ☆84 · Updated last year
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache. ☆43 · Updated 10 months ago
- [CVPR 2023] Bidirectional Cross-Modal Knowledge Exploration for Video Recognition with Pre-trained Vision-Language Models. ☆151 · Updated 8 months ago
- Repository for the paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models. ☆37 · Updated last year
- Official implementation of "Towards Efficient Visual Adaption via Structural Re-parameterization". ☆183 · Updated last year
- NegCLIP. ☆32 · Updated 2 years ago