idavidrein / gpqa
GPQA: A Graduate-Level Google-Proof Q&A Benchmark
☆466, updated Sep 30, 2024
Alternatives and similar repositories for gpqa
Users interested in gpqa are comparing it to the repositories listed below.
- A benchmark that challenges language models to code solutions for scientific problems (☆171, updated this week)
- Measuring Massive Multitask Language Understanding | ICLR 2021 (☆1,550, updated May 28, 2023)
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] (☆335, updated Nov 22, 2025)
- ☆4,346, updated Jul 31, 2025
- Arena-Hard-Auto: An automatic LLM benchmark (☆994, updated Jun 21, 2025)
- [AAAI 2025] Augmenting Math Word Problems via Iterative Question Composing (https://arxiv.org/abs/2401.09003) (☆23, updated Oct 2, 2025)
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location (☆85, updated Aug 10, 2024)
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" (☆796, updated Jul 16, 2025)
- The official evaluation suite and dynamic data release for MixEval (☆255, updated Nov 10, 2024)
- ☆772, updated Jun 13, 2024
- [ACL 2024 Findings] MathBench: A Comprehensive Multi-Level Difficulty Mathematics Evaluation Dataset (☆111, updated May 22, 2025)
- SWE-bench: Can Language Models Resolve Real-world GitHub Issues? (☆4,267, updated Feb 3, 2026)
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" (☆58, updated Feb 29, 2024)
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI (☆107, updated Mar 6, 2025)
- RewardBench: the first evaluation tool for reward models (☆687, updated Jan 31, 2026)
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them (☆548, updated Jun 25, 2024)
- The MATH Dataset (NeurIPS 2021) (☆1,301, updated Sep 6, 2025)
- Humanity's Last Exam (☆1,352, updated Oct 7, 2025)
- ☆342, updated Jun 5, 2025
- A framework for few-shot evaluation of language models (☆11,393, updated this week)
- ☆182, updated Apr 30, 2025
- 800,000 step-level correctness labels on LLM solutions to MATH problems (☆2,092, updated Jun 1, 2023)
- Download, parse, and filter data from Phil Papers. Data-ready for The-Pile (☆19, updated Aug 28, 2023)
- PostTrainBench measures how well CLI agents like Claude Code or Codex CLI can post-train base LLMs on a single H100 GPU in 10 hours (☆138, updated this week)
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … (☆2,667, updated this week)
- TruthfulQA: Measuring How Models Imitate Human Falsehoods (☆880, updated Jan 16, 2025)
- SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning (NeurIPS D&B Track 2024) (☆86, updated Feb 25, 2024)
- Evaluation of LLMs on latest math competitions (☆216, updated Feb 5, 2026)
- ☆1,088, updated Jan 10, 2026
- Benchmarking Benchmark Leakage in Large Language Models (☆58, updated May 20, 2024)
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) (☆421, updated Oct 25, 2025)
- LiveBench: A Challenging, Contamination-Free LLM Benchmark (☆1,032, updated Feb 6, 2026)
- Code for the paper "Evaluating Large Language Models Trained on Code" (☆3,127, updated Jan 17, 2025)
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" (☆184, updated May 20, 2025)
- Memory experiments with LLMs (☆11, updated Mar 31, 2023)
- Simple retrieval from LLMs at various context lengths to measure accuracy (☆2,167, updated Aug 17, 2024)
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? (☆32, updated Aug 5, 2025)
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast (☆1,946, updated Aug 9, 2025)
- ☆130, updated Jul 8, 2024