idavidrein / gpqa
GPQA: A Graduate-Level Google-Proof Q&A Benchmark
☆436 · Updated last year
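For orientation before the comparisons below, here is a minimal sketch of loading GPQA multiple-choice items for evaluation. It assumes the data is mirrored on the Hugging Face Hub as the gated dataset `Idavidrein/gpqa` and that the field names match the CSV headers in this repo's data release (Question, Correct Answer, Incorrect Answer 1–3); adjust both if your copy differs. Shuffling the options per question matters because the raw CSV stores the correct answer in a fixed column.

```python
# A minimal sketch, not the repo's official loader: assumes the GPQA data is
# published on the Hugging Face Hub as the gated dataset "Idavidrein/gpqa"
# (accept the terms on the Hub and run `huggingface-cli login` first).
import random

from datasets import load_dataset

# "gpqa_diamond" is the hardest subset; "gpqa_main" and "gpqa_extended" are the others.
ds = load_dataset("Idavidrein/gpqa", "gpqa_diamond")["train"]

example = ds[0]
choices = [
    example["Correct Answer"],
    example["Incorrect Answer 1"],
    example["Incorrect Answer 2"],
    example["Incorrect Answer 3"],
]
random.shuffle(choices)  # randomize answer order so position gives nothing away

print(example["Question"])
for letter, choice in zip("ABCD", choices):
    print(f"({letter}) {choice}")
```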
Alternatives and similar repositories for gpqa
Users interested in gpqa are comparing it to the repositories listed below.
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆317 · Updated 2 weeks ago
- Automatic evals for LLMs ☆564 · Updated 5 months ago
- A simple unified framework for evaluating LLMs ☆255 · Updated 7 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆366 · Updated last year
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆732 · Updated 4 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- ☆556 · Updated last year
- Reproducible, flexible LLM evaluations ☆293 · Updated 3 weeks ago
- Evaluation of LLMs on latest math competitions ☆200 · Updated last month
- A project to improve skills of large language models ☆648 · Updated this week
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆234 · Updated 4 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆253 · Updated last year
- ☆200 · Updated 7 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆340 · Updated last month
- ☆474 · Updated last year
- ☆328 · Updated 6 months ago
- Official repo for paper: "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆268 · Updated last month
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆535 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆663 · Updated 6 months ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆595 · Updated 4 months ago
- Open source interpretability artefacts for R1. ☆164 · Updated 7 months ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆391 · Updated 3 weeks ago
- Code for Quiet-STaR ☆743 · Updated last year
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆54 · Updated last year
- A benchmark that challenges language models to code solutions for scientific problems ☆157 · Updated last week
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆627 · Updated 8 months ago
- ☆241 · Updated last year
- Repository for Zochi's Research ☆294 · Updated 3 weeks ago
- Official repository for ORPO ☆467 · Updated last year
- A curated collection of LLM reasoning and planning resources, including key papers, limitations, benchmarks, and additional learning mate… ☆305 · Updated 9 months ago