TIGER-AI-Lab / MMLU-Pro
The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024]
☆309 · Updated 8 months ago
Alternatives and similar repositories for MMLU-Pro
Users interested in MMLU-Pro are comparing it to the repositories listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- ☆313 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆253 · Updated last year
- LongRoPE is a novel method that can extend the context window of pre-trained LLMs to an impressive 2048k tokens. ☆271 · Updated 3 weeks ago
- A simple unified framework for evaluating LLMs ☆254 · Updated 7 months ago
- FuseAI Project ☆584 · Updated 9 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆363 · Updated last year
- [NeurIPS 2025] A simple extension to vLLM to help you speed up reasoning models without training. ☆206 · Updated 5 months ago
- Reproducible, flexible LLM evaluations ☆266 · Updated this week
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆425 · Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆445 · Updated last year
- Official repository for ORPO ☆465 · Updated last year
- ☆326 · Updated 5 months ago
- Automatic evals for LLMs ☆557 · Updated 4 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆523 · Updated 10 months ago
- RewardBench: the first evaluation tool for reward models. ☆656 · Updated 5 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆441 · Updated last year
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆268 · Updated last month
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆264 · Updated last year
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆355 · Updated last year
- ☆320 · Updated last year
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆110 · Updated 9 months ago
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆202 · Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆238 · Updated 8 months ago
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆321 · Updated 8 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆477 · Updated last year
- ☆198 · Updated 7 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆715 · Updated 4 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 · Updated 4 months ago
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling" ☆274 · Updated 9 months ago