TIGER-AI-Lab / MMLU-Pro
The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024]
☆292 · Updated 6 months ago
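For reference, a minimal sketch of loading the benchmark, assuming the dataset is published on the Hugging Face Hub under `TIGER-Lab/MMLU-Pro` and exposes `question`, `options`, `answer`, and `category` fields (check the repo's README for the exact schema and answer-extraction logic):

```python
# Minimal sketch: load MMLU-Pro and score one model output.
# Assumes the dataset lives on the Hugging Face Hub as "TIGER-Lab/MMLU-Pro"
# with "question", "options", "answer", and "category" fields; verify the
# exact schema against the repo's README.
import re

from datasets import load_dataset

dataset = load_dataset("TIGER-Lab/MMLU-Pro", split="test")

example = dataset[0]
print(example["category"])             # subject area of the question
print(example["question"])             # question stem
for letter, option in zip("ABCDEFGHIJ", example["options"]):
    print(f"{letter}. {option}")       # up to ten answer choices

def extract_choice(model_output: str) -> str | None:
    """Pull a predicted letter out of free-form model text.

    Illustrative heuristic only, not the repo's exact parser: it looks
    for the common "answer is (X)" convention with options A-J.
    """
    match = re.search(r"answer is \(?([A-J])\)?", model_output)
    return match.group(1) if match else None

prediction = extract_choice("... so the answer is (C).")
print(prediction == example["answer"])  # compare against the gold letter
```

Accuracy is then the fraction of questions where the extracted letter matches the gold answer.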
Alternatives and similar repositories for MMLU-Pro
Users interested in MMLU-Pro are comparing it to the libraries listed below.
- A simple unified framework for evaluating LLMs ☆245 · Updated 5 months ago
- ☆311 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆241 · Updated 10 months ago
- Reproducible, flexible LLM evaluations ☆248 · Updated 2 months ago
- Automatic evals for LLMs ☆526 · Updated 2 months ago
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆408 · Updated 11 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆362 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆245 · Updated 10 months ago
- ☆315 · Updated 3 months ago
- Simple extension on vLLM to help you speed up reasoning models without training. ☆189 · Updated 3 months ago
- ☆320 · Updated last year
- ☆190 · Updated 5 months ago
- FuseAI Project ☆581 · Updated 7 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆244 · Updated 4 months ago
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆261 · Updated 4 months ago
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens. ☆249 · Updated last year
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆659 · Updated 2 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆347 · Updated 11 months ago
- ☆138 · Updated 6 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆431 · Updated last year
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 2 months ago
- RewardBench: the first evaluation tool for reward models. ☆634 · Updated 3 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆171 · Updated 2 months ago
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆196 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆438 · Updated 11 months ago
- Code for the paper "Learning to Reason without External Rewards" ☆354 · Updated 2 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆256 · Updated last year
- The code of our paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Memory" ☆381 · Updated last year
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents", ACL'24 Best Resource Paper ☆246 · Updated last month
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆244 · Updated 5 months ago