FreedomIntelligence / MLLM-Bench
MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria
☆72 · Updated last year
Alternatives and similar repositories for MLLM-Bench
Users interested in MLLM-Bench are comparing it to the libraries listed below.
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆83 · Updated last year
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆91 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆97 · Updated last year
- ☆100 · Updated last year
- ☆66 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆116 · Updated 11 months ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆83 · Updated 9 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 5 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆59 · Updated last year
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆276 · Updated last year
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆115 · Updated last year
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆167 · Updated last year
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆148 · Updated 11 months ago
- ☆50 · Updated 2 years ago
- An RLHF Infrastructure for Vision-Language Models ☆186 · Updated 11 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆45 · Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- Preference Learning for LLaVA ☆54 · Updated last year
- ☆98 · Updated 10 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆142 · Updated last year
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆34 · Updated last year
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities ☆120 · Updated 5 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆88 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆70 · Updated last year
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆38 · Updated 2 weeks ago
- [ACM Multimedia 2025] The official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual… ☆82 · Updated 8 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆312 · Updated 9 months ago
- [COLM'25] Official implementation of the Law of Vision Representation in MLLMs ☆168 · Updated last month
- [MM2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆58 · Updated last year
- A collection of visual instruction tuning datasets ☆76 · Updated last year