TIGER-AI-Lab / MMLU-Pro
The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024]
☆281 · Updated 6 months ago
Alternatives and similar repositories for MMLU-Pro
Users interested in MMLU-Pro are comparing it to the repositories listed below.
- A simple unified framework for evaluating LLMs ☆241 · Updated 4 months ago
- Reproducible, flexible LLM evaluations ☆238 · Updated last month
- Benchmarking LLMs with Challenging Tasks from Real Users ☆238 · Updated 9 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆360 · Updated 11 months ago
- FuseAI Project ☆579 · Updated 7 months ago
- ☆311 · Updated last year
- The official evaluation suite and dynamic data release for MixEval ☆244 · Updated 9 months ago
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆405 · Updated 10 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆344 · Updated 11 months ago
- ☆320 · Updated 11 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆633 · Updated last month
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆435 · Updated 10 months ago
- ☆313 · Updated 2 months ago
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆254 · Updated 3 months ago
- Automatic evals for LLMs ☆522 · Updated 2 months ago
- LongRoPE is a novel method that can extend the context window of pre-trained LLMs to an impressive 2048k tokens ☆241 · Updated last year
- Prompt-to-Leaderboard ☆250 · Updated 3 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆171 · Updated last month
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆471 · Updated last year
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆309 · Updated 11 months ago
- ☆187 · Updated 4 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆258 · Updated last month
- A simple extension of vLLM to help you speed up reasoning models without training ☆181 · Updated 2 months ago
- Code for the paper "Learning to Reason without External Rewards" ☆347 · Updated last month
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆507 · Updated 7 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior ☆245 · Updated 4 months ago
- RewardBench: the first evaluation tool for reward models ☆628 · Updated 2 months ago
- Official repository for ORPO ☆463 · Updated last year
- Scaling Data for SWE-agents ☆378 · Updated this week
- ☆636 · Updated this week