TIGER-AI-Lab / MMLU-Pro
The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024]
☆295 · Updated 7 months ago
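For readers who want to try the benchmark itself, below is a minimal sketch of loading the MMLU-Pro data through the Hugging Face `datasets` library; the dataset ID `TIGER-Lab/MMLU-Pro`, the split names, and the field names are assumptions about the public release rather than details taken from this page.

```python
# Minimal sketch: loading MMLU-Pro with the Hugging Face `datasets` library.
# The dataset ID, split names, and field names below are assumptions about
# the public release, not details confirmed by this page.
from datasets import load_dataset

dataset = load_dataset("TIGER-Lab/MMLU-Pro")  # assumed Hugging Face dataset ID
test_split = dataset["test"]                  # assumed split name

# Inspect one example: a question, its answer options, and the gold answer.
example = test_split[0]
print(example["question"])   # assumed field: question text
print(example["options"])    # assumed field: list of answer options
print(example["answer"])     # assumed field: gold answer letter
```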
Alternatives and similar repositories for MMLU-Pro
Users interested in MMLU-Pro are comparing it to the repositories listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆241 · Updated 11 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆361 · Updated last year
- Reproducible, flexible LLM evaluations ☆252 · Updated 3 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆250 · Updated 11 months ago
- A simple unified framework for evaluating LLMs ☆250 · Updated 5 months ago
- ☆312 · Updated last year
- FuseAI Project ☆583 · Updated 8 months ago
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆413 · Updated last year
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆675 · Updated 2 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆348 · Updated last year
- ☆319 · Updated 4 months ago
- Automatic evals for LLMs ☆539 · Updated 3 months ago
- ☆320 · Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆435 · Updated last year
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆312 · Updated last year
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens. ☆260 · Updated last year
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 3 months ago
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆265 · Updated 5 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆439 · Updated 11 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆476 · Updated last year
- A project to improve the skills of large language models ☆571 · Updated this week
- Official repository for ORPO ☆463 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆518 · Updated 8 months ago
- Code for the paper "Learning to Reason without External Rewards" ☆360 · Updated 3 months ago
- ☆194 · Updated 5 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆258 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆173 · Updated 3 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆219 · Updated 2 months ago
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆196 · Updated 4 months ago
- RewardBench: the first evaluation tool for reward models. ☆639 · Updated 4 months ago