TRI-ML / vlm-evaluation
VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning
☆132 · Updated last year
Alternatives and similar repositories for vlm-evaluation
Users interested in vlm-evaluation are comparing it to the repositories listed below.
- An RLHF Infrastructure for Vision-Language Models ☆187 · Updated last year
- ☆155 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆99 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆91 · Updated last year
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆316 · Updated 2 months ago
- [COLM'25] Official implementation of the Law of Vision Representation in MLLMs ☆171 · Updated 2 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆145 · Updated last year
- ☆100 · Updated last year
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆152 · Updated 2 months ago
- Matryoshka Multimodal Models ☆120 · Updated 10 months ago
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning". ☆67 · Updated last year
- M-HalDetect Dataset Release ☆26 · Updated 2 years ago
- ☆78 · Updated last year
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆87 · Updated last year
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆72 · Updated last year
- Python library to evaluate VLMs' robustness across diverse benchmarks ☆220 · Updated last month
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆233 · Updated 3 months ago
- [ACM Multimedia 2025] This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual… ☆82 · Updated 9 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆316 · Updated 10 months ago
- (ACL 2025) MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale ☆49 · Updated 6 months ago
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR … ☆286 · Updated 2 years ago
- ☆104 · Updated 11 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆200 · Updated last year
- [COLM-2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆145 · Updated last year
- ☆76 · Updated last year
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models. ☆135 · Updated 2 years ago
- ☆356 · Updated last year
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆235 · Updated 8 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆84 · Updated last month
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆277 · Updated last year