MMMU-Benchmark / MMMU
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI"
☆378 · Updated this week
Alternatives and similar repositories for MMMU:
Users interested in MMMU are comparing it to the libraries listed below:
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆278 · Updated 2 months ago
- RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness ☆273 · Updated last month
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Feature Pyramid via Hierarchical Window Transformer ☆348 · Updated this week
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆173 · Updated 4 months ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆324 · Updated this week
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" (TMLR 2024) ☆193 · Updated this week
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆265 · Updated 2 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆257 · Updated 4 months ago
- Long Context Transfer from Language to Vision ☆356 · Updated last month
- Aligning LMMs with Factually Augmented RLHF ☆339 · Updated last year
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆335 · Updated last week
- HPT - Open Multimodal LLMs from HyperGAI ☆313 · Updated 7 months ago
- MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts ☆263 · Updated last month
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆251 · Updated 6 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆223 · Updated 3 weeks ago
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆338 · Updated last year
- OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆302 · Updated 2 months ago
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ☆210 · Updated 9 months ago
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆266 · Updated 10 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆382 · Updated 9 months ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆120 · Updated 3 months ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆306 · Updated 9 months ago
- [ACL 2024] Progressive LLaMA with Block Expansion ☆491 · Updated 7 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆215 · Updated 3 weeks ago
- ✨✨ Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆442 · Updated last month
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆188 · Updated last week
- Official implementation of the Law of Vision Representation in MLLMs ☆145 · Updated 2 months ago