GPQA: A Graduate-Level Google-Proof Q&A Benchmark
☆480 · Sep 30, 2024 · Updated last year
Alternatives and similar repositories for gpqa
Users interested in gpqa are comparing it to the libraries listed below.
Sorting:
- A benchmark that challenges language models to code solutions for scientific problems ☆180 · Mar 16, 2026 · Updated last week
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,569 · May 28, 2023 · Updated 2 years ago
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆352 · Mar 18, 2026 · Updated last week
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆823 · Jul 16, 2025 · Updated 8 months ago
- ☆4,406 · Jul 31, 2025 · Updated 7 months ago
- Arena-Hard-Auto: An automatic LLM benchmark. ☆1,008 · Jun 21, 2025 · Updated 9 months ago
- SWE-bench: Can Language Models Resolve Real-world GitHub Issues? ☆4,527 · Mar 19, 2026 · Updated last week
- ☆13 · Jul 2, 2025 · Updated 8 months ago
- A framework for few-shot evaluation of language models. ☆11,802 · Mar 18, 2026 · Updated last week
- ☆771 · Jun 13, 2024 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆256 · Nov 10, 2024 · Updated last year
- The MATH Dataset (NeurIPS 2021) ☆1,328 · Sep 6, 2025 · Updated 6 months ago
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆551 · Jun 25, 2024 · Updated last year
- [AAAI 2025] Augmenting Math Word Problems via Iterative Question Composing (https://arxiv.org/abs/2401.09003) ☆23 · Oct 2, 2025 · Updated 5 months ago
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆1,108 · Updated this week
- [NeurIPS'24 LanGame workshop] On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆42 · Jul 7, 2025 · Updated 8 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆107 · Mar 6, 2025 · Updated last year
- ☆184 · Apr 30, 2025 · Updated 10 months ago
- RewardBench: the first evaluation tool for reward models. ☆705 · Feb 16, 2026 · Updated last month
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,106 · Jun 1, 2023 · Updated 2 years ago
- Code for the paper "Evaluating Large Language Models Trained on Code" ☆3,172 · Jan 17, 2025 · Updated last year
- TruthfulQA: Measuring How Models Imitate Human Falsehoods ☆894 · Jan 16, 2025 · Updated last year
- [ACL 2024 Findings] MathBench: A Comprehensive Multi-Level Difficulty Mathematics Evaluation Dataset ☆112 · May 22, 2025 · Updated 10 months ago
- ☆1,113 · Jan 10, 2026 · Updated 2 months ago
- SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning (NeurIPS D&B Track 2024) ☆86 · Feb 25, 2024 · Updated 2 years ago
- [COLM 2025] Official code for "When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for LLM Reasoni… ☆15 · Oct 31, 2025 · Updated 4 months ago
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,718 · Mar 20, 2026 · Updated last week
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆367 · Sep 6, 2024 · Updated last year
- Learning to route instances for Human vs AI Feedback (ACL Main '25) ☆27 · Jul 23, 2025 · Updated 8 months ago
- Benchmarking Benchmark Leakage in Large Language Models ☆60 · May 20, 2024 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Feb 29, 2024 · Updated 2 years ago
- ☆342 · Jun 5, 2025 · Updated 9 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆32 · Aug 5, 2025 · Updated 7 months ago
- ☆81 · Mar 11, 2025 · Updated last year
- ☆30 · Dec 27, 2024 · Updated last year
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,751 · Nov 15, 2025 · Updated 4 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆86 · Aug 10, 2024 · Updated last year
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,961 · Aug 9, 2025 · Updated 7 months ago
- Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024 ☆1,700 · Oct 2, 2025 · Updated 5 months ago