TIGER-AI-Lab / MMLU-Pro
The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024]
☆259 · Updated 4 months ago
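As a quick orientation, the MMLU-Pro data is distributed through the Hugging Face Hub; below is a minimal sketch of loading it with the `datasets` library. The dataset ID `TIGER-Lab/MMLU-Pro` and the field names are assumptions based on the public release, not something stated on this page.

```python
# Minimal sketch: load MMLU-Pro from the Hugging Face Hub and inspect one item.
# Assumes the dataset is published as "TIGER-Lab/MMLU-Pro" with a "test" split
# of multiple-choice questions (question, options, answer fields).
from datasets import load_dataset

dataset = load_dataset("TIGER-Lab/MMLU-Pro", split="test")

example = dataset[0]
print(example["question"])  # question stem
print(example["options"])   # list of answer options (up to 10 per question)
print(example["answer"])    # letter of the correct option, e.g. "A"
```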
Alternatives and similar repositories for MMLU-Pro
Users interested in MMLU-Pro are comparing it to the libraries listed below:
- Benchmarking LLMs with Challenging Tasks from Real Users ☆228 · Updated 8 months ago
- ☆310 · Updated last year
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆357 · Updated 10 months ago
- FuseAI Project ☆578 · Updated 5 months ago
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆370 · Updated 9 months ago
- A simple unified framework for evaluating LLMs ☆225 · Updated 3 months ago
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens. ☆232 · Updated 10 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆503 · Updated 6 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆242 · Updated 8 months ago
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆244 · Updated 2 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆427 · Updated last year
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆307 · Updated 10 months ago
- Reproducible, flexible LLM evaluations ☆219 · Updated this week
- ☆304 · Updated last month
- Automatic evals for LLMs ☆467 · Updated 2 weeks ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆166 · Updated last week
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆590 · Updated last week
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆223 · Updated 2 months ago
- ☆319 · Updated 9 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆253 · Updated last week
- Official repository for ORPO ☆458 · Updated last year
- Official repo for "Make Your LLM Fully Utilize the Context" ☆252 · Updated last year
- ☆181 · Updated 2 months ago
- ☆524 · Updated 7 months ago
- RewardBench: the first evaluation tool for reward models. ☆612 · Updated last month
- Codes for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆341 · Updated 9 months ago
- Code and example data for the paper: Rule Based Rewards for Language Model Safety ☆189 · Updated 11 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆485 · Updated 10 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆138 · Updated 8 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆657 · Updated last year