mertyg / vision-language-models-are-bows
Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR 2023
☆268 · Updated last year
Alternatives and similar repositories for vision-language-models-are-bows:
Users interested in vision-language-models-are-bows are comparing it to the libraries listed below.
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆199 · Updated 11 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆242 · Updated 4 months ago
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆271 · Updated 11 months ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆77 · Updated last year
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition" ☆193 · Updated 3 months ago
- SVIT: Scaling up Visual Instruction Tuning ☆164 · Updated 8 months ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆270 · Updated 3 months ago
- Densely Captioned Images (DCI) dataset repository ☆169 · Updated 8 months ago
- [NeurIPS 2023] Text data, code and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆267 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆142 · Updated 10 months ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆204 · Updated 2 years ago
- Official implementation of the Law of Vision Representation in MLLMs ☆150 · Updated 3 months ago
- Official repository for the A-OKVQA dataset ☆75 · Updated 9 months ago
- A collection of visual instruction tuning datasets ☆76 · Updated 11 months ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated last year
- 😎 Curated list of awesome LMM hallucination papers, methods & resources ☆148 · Updated 11 months ago
- VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning ☆103 · Updated 5 months ago
- Visualizing the attention of vision-language models ☆119 · Updated this week
- Code for "Finetune like you pretrain: Improved finetuning of zero-shot vision models" ☆98 · Updated last year
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆145 · Updated 2 years ago
- Up-to-date curated list of state-of-the-art large vision-language model hallucination research, papers & resources ☆95 · Updated last week
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆78 · Updated 11 months ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆157 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆79 · Updated 10 months ago
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆70 · Updated 4 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆81 · Updated last year