chenllliang / MMEvalPro
[NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs
☆23 · Updated 4 months ago
Alternatives and similar repositories for MMEvalPro:
Users interested in MMEvalPro are comparing it to the repositories listed below.
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated last month
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆45 · Updated 3 months ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆29 · Updated 7 months ago
- The code and data for the paper JiuZhang3.0 ☆40 · Updated 8 months ago
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆26 · Updated 7 months ago
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆49 · Updated 4 months ago
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆27 · Updated 7 months ago
- Code for the Findings of EMNLP 2023 paper "Coarse-to-Fine Dual Encoders are Better Frame Identification Learners" ☆12 · Updated last year
- This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" ☆45 · Updated 7 months ago
- [ICLR 2025] SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights ☆53 · Updated last week
- [NAACL 2024] A Synthetic, Scalable and Systematic Evaluation Suite for Large Language Models ☆32 · Updated 8 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- Vision Large Language Models trained on the M3IT instruction-tuning dataset ☆17 · Updated last year
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆41 · Updated last week
- Large Language Models Can Self-Improve in Long-context Reasoning ☆62 · Updated 2 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆92 · Updated 2 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated 11 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH ☆16 · Updated last month
- Codebase for "Instruction Following without Instruction Tuning" ☆33 · Updated 4 months ago
- M-STAR (Multimodal Self-Evolving TrAining for Reasoning) project: diving into self-evolving training for multimodal reasoning ☆55 · Updated last month
- [NeurIPS 2024] A comprehensive benchmark for evaluating the critique ability of LLMs ☆38 · Updated 2 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆77 · Updated 4 months ago
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆71 · Updated 3 weeks ago
- Open-Pandora: On-the-fly Control Video Generation ☆32 · Updated 2 months ago