MMMU-Benchmark / MMMU
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI".
☆536 · Updated 8 months ago
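As a minimal sketch of what evaluating on MMMU involves (assuming the public `MMMU/MMMU` dataset on the Hugging Face Hub, which exposes per-subject configs such as `Accounting` and a `validation` split with gold option letters in an `answer` field; the repo's official evaluation scripts may differ):

```python
# Minimal sketch: load one MMMU subject and score multiple-choice predictions.
# Assumes the "MMMU/MMMU" dataset on the Hugging Face Hub; field names here
# ("answer", the per-subject config) are assumptions, not the repo's own API.
from datasets import load_dataset

ds = load_dataset("MMMU/MMMU", "Accounting", split="validation")

def score(predict):
    """predict(example) -> an option letter such as "A"."""
    correct = 0
    for ex in ds:
        if predict(ex) == ex["answer"]:  # "answer" holds the gold option letter
            correct += 1
    return correct / len(ds)

# Example: a trivial baseline that always answers "A".
print(f"accuracy: {score(lambda ex: 'A'):.3f}")
```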
Alternatives and similar repositories for MMMU
Users interested in MMMU are comparing it to the repositories listed below.
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆281 · Updated 7 months ago
- MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts ☆352 · Updated 3 months ago
- Aligning LMMs with Factually Augmented RLHF ☆388 · Updated 2 years ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆320 · Updated last year
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆441 · Updated 8 months ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆358 · Updated last year
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆302 · Updated last year
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆320 · Updated 3 months ago
- Explore the Multimodal “Aha Moment” on a 2B Model ☆621 · Updated 10 months ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆358 · Updated 2 years ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆359 · Updated last month
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆317 · Updated last year
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024 Best Paper] ☆237 · Updated 2 weeks ago
- Code for Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models ☆276 · Updated 5 months ago
- Paper collection on multi-modal LLMs for Math/STEM/Code. ☆134 · Updated 2 months ago
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆545 · Updated last week
- Long Context Transfer from Language to Vision ☆398 · Updated 10 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆280 · Updated last year
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆555 · Updated last year
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆116 · Updated last year
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆212 · Updated 3 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆138 · Updated 8 months ago
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆684 · Updated 2 years ago
- ☆218 · Updated last year
- Research Trends in LLM-guided Multimodal Learning. ☆357 · Updated 2 years ago
- LLaVA-UHD v3: Progressive Visual Compression for Efficient Native-Resolution Encoding in MLLMs ☆410 · Updated last month
- An RLHF Infrastructure for Vision-Language Models ☆193 · Updated last year
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆294 · Updated last year
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. ☆153 · Updated 2 weeks ago
- ☆358 · Updated last year