TIGER-AI-Lab / MMLU-Pro
The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024]
☆321 · Updated last month
Alternatives and similar repositories for MMLU-Pro
Users interested in MMLU-Pro are comparing it to the repositories listed below.
- A simple unified framework for evaluating LLMs ☆259 · Updated 8 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆366 · Updated last year
- ☆313 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆253 · Updated last year
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆452 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆533 · Updated 11 months ago
- RewardBench: the first evaluation tool for reward models. ☆674 · Updated 6 months ago
- ☆320 · Updated last year
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆271 · Updated 2 months ago
- Reproducible, flexible LLM evaluations ☆312 · Updated last month
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆368 · Updated last year
- ☆329 · Updated 7 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆753 · Updated 5 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆453 · Updated last year
- [NeurIPS 2025] A simple extension on vLLM to help you speed up reasoning models without training. ☆216 · Updated 7 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆271 · Updated last year
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens. ☆276 · Updated 2 months ago
- ☆201 · Updated 8 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆447 · Updated last year
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆249 · Updated 8 months ago
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI ☆466 · Updated this week
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 6 months ago
- Automatic evals for LLMs ☆570 · Updated 2 weeks ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆323 · Updated last year
- A project to improve skills of large language models ☆734 · Updated this week
- FuseAI Project ☆585 · Updated 11 months ago
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆204 · Updated last year
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings ☆58 · Updated 11 months ago
- ☆318 · Updated last year