TIGER-AI-Lab / MMLU-Pro
The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024]
☆327 · Updated 2 months ago
Alternatives and similar repositories for MMLU-Pro
Users interested in MMLU-Pro are comparing it to the repositories listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆245 · Updated last year
- A simple unified framework for evaluating LLMs ☆258 · Updated 9 months ago
- ☆313 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆254 · Updated last year
- ☆328 · Updated 7 months ago
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆273 · Updated 3 months ago
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆461 · Updated last year
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆366 · Updated last year
- Reproducible, flexible LLM evaluations ☆327 · Updated 2 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆770 · Updated 6 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆533 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆445 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 · Updated 6 months ago
- [NeurIPS 2025] A simple extension on vLLM to help you speed up reasoning models without training. ☆218 · Updated 7 months ago
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to 2048k tokens. ☆276 · Updated 3 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆454 · Updated last year
- ☆320 · Updated last year
- Automatic evals for LLMs ☆575 · Updated last month
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆370 · Updated last year
- FuseAI Project ☆586 · Updated last year
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆273 · Updated last year
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆259 · Updated 8 months ago
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI ☆473 · Updated 3 weeks ago
- ☆202 · Updated 9 months ago
- Code for the paper: "Learning to Reason without External Rewards" ☆385 · Updated 6 months ago
- RewardBench: the first evaluation tool for reward models. ☆683 · Updated last week
- ☆321 · Updated last year
- 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… ☆362 · Updated 2 months ago
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings ☆61 · Updated 11 months ago
- Official repository for ORPO ☆469 · Updated last year