TRI-ML / vlm-evaluation
VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning
☆97 · Updated 4 months ago
Alternatives and similar repositories for vlm-evaluation:
Users interested in vlm-evaluation are comparing it to the libraries listed below.
- ☆134 · Updated 2 months ago
- Official implementation of the Law of Vision Representation in MLLMs · ☆145 · Updated 2 months ago
- ☆67 · Updated 6 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … · ☆202 · Updated 3 weeks ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… · ☆265 · Updated 2 months ago
- Visualizing the attention of vision-language models · ☆95 · Updated 2 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… · ☆112 · Updated 6 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation · ☆105 · Updated last year
- An RLHF Infrastructure for Vision-Language Models · ☆145 · Updated 2 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning · ☆78 · Updated 8 months ago
- Python library to evaluate VLMs' robustness across diverse benchmarks · ☆181 · Updated 3 weeks ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" · ☆193 · Updated 9 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge · ☆62 · Updated last month
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?" · ☆160 · Updated 3 months ago
- ☆304 · Updated 11 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" · ☆77 · Updated 9 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization · ☆77 · Updated 11 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model · ☆251 · Updated 6 months ago
- [COLM-2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs · ☆132 · Updated 4 months ago
- Matryoshka Multimodal Models · ☆90 · Updated last month
- ☆94 · Updated last year
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality · ☆74 · Updated 11 months ago
- Official repo of "MMBench: Is Your Multi-modal Model an All-around Player?" · ☆173 · Updated 4 months ago
- When do we not need larger vision models? · ☆354 · Updated last month
- SVIT: Scaling up Visual Instruction Tuning · ☆164 · Updated 6 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension · ☆62 · Updated 7 months ago
- A collection of visual instruction tuning datasets · ☆76 · Updated 10 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding · ☆229 · Updated 3 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" (TMLR 2024) · ☆193 · Updated this week
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI · ☆96 · Updated 6 months ago