TRI-ML / vlm-evaluation
VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning
☆126 · Updated last year
Alternatives and similar repositories for vlm-evaluation
Users interested in vlm-evaluation are comparing it to the repositories listed below.
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆88 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆92 · Updated last year
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR … ☆285 · Updated 2 years ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆297 · Updated 10 months ago
- An RLHF Infrastructure for Vision-Language Models ☆183 · Updated 10 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆130 · Updated last year
- Python library to evaluate VLMs' robustness across diverse benchmarks ☆211 · Updated last week
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆84 · Updated last year
- Official implementation of the Law of Vision Representation in MLLMs ☆166 · Updated 10 months ago
- Code and datasets for "What's "up" with vision-language models? Investigating their struggle with spatial reasoning" ☆61 · Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆222 · Updated last month
- Matryoshka Multimodal Models ☆112 · Updated 7 months ago
- Visualizing the attention of vision-language models ☆231 · Updated 6 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆227 · Updated 5 months ago
- 🔥 [ICLR 2025] Official Benchmark Toolkits for "Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark" ☆30 · Updated 7 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆139 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆70 · Updated last year
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆91 · Updated 11 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆93 · Updated last month
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?" ☆194 · Updated 11 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆70 · Updated 11 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge ☆84 · Updated 7 months ago
- [COLM 2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆144 · Updated last year
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆308 · Updated 8 months ago