TRI-ML / vlm-evaluation
VLM Evaluation: a benchmark for VLMs spanning text-generation tasks, from VQA to captioning
☆120 · Updated 10 months ago
Alternatives and similar repositories for vlm-evaluation
Users interested in vlm-evaluation are comparing it to the repositories listed below.
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆86 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆91 · Updated last year
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆128 · Updated last year
- Official implementation of the Law of Vision Representation in MLLMs ☆163 · Updated 8 months ago
- Matryoshka Multimodal Models ☆112 · Updated 6 months ago
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR … ☆282 · Updated 2 years ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆295 · Updated 8 months ago
- An RLHF infrastructure for Vision-Language Models ☆179 · Updated 8 months ago
- Python library to evaluate VLMs' robustness across diverse benchmarks ☆210 · Updated this week
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆82 · Updated last year
- Code and datasets for "What's "up" with vision-language models? Investigating their struggle with spatial reasoning" ☆56 · Updated last year
- ☆152 · Updated 9 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆189 · Updated 10 months ago
- ☆61 · Updated 9 months ago
- [ACM Multimedia 2025] This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual… ☆81 · Updated 5 months ago
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆88 · Updated 9 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆87 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆69 · Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆216 · Updated last year
- ☆100 · Updated last year
- M-HalDetect Dataset Release ☆25 · Updated last year
- ☆85 · Updated 7 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆147 · Updated last year
- [COLM-2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆144 · Updated 11 months ago
- ☆344 · Updated last year
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆70 · Updated 9 months ago
- 🔥 [ICLR 2025] Official Benchmark Toolkits for "Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark" ☆29 · Updated 5 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆133 · Updated last year
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆77 · Updated last year
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆307 · Updated 6 months ago