idavidrein / gpqa
GPQA: A Graduate-Level Google-Proof Q&A Benchmark
☆367 · Updated 8 months ago
Alternatives and similar repositories for gpqa
Users interested in gpqa are comparing it to the repositories listed below:
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆254 · Updated 4 months ago
- A simple unified framework for evaluating LLMs ☆220 · Updated 2 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆226 · Updated 7 months ago
- ☆570 · Updated 2 months ago
- RewardBench: the first evaluation tool for reward models ☆604 · Updated 2 weeks ago
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆497 · Updated last year
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆566 · Updated last week
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆551 · Updated 3 months ago
- Automatic evals for LLMs ☆437 · Updated 3 weeks ago
- A project to improve the skills of large language models ☆429 · Updated this week
- (ICML 2024) AlphaZero-like tree search can guide large language model decoding and training ☆276 · Updated last year
- ☆782 · Updated 2 months ago
- ☆332 · Updated 3 weeks ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks 🧮✨ ☆228 · Updated last year
- Reproducible, flexible LLM evaluations ☆214 · Updated last month
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆357 · Updated 9 months ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆489 · Updated last month
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆251 · Updated 3 weeks ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" ☆245 · Updated 7 months ago
- Benchmarking long-form factuality in large language models; original code for the paper "Long-form factuality in large language models" ☆620 · Updated last week
- Open source interpretability artefacts for R1 ☆149 · Updated 2 months ago
- ☆940 · Updated 5 months ago
- ☆181 · Updated 2 months ago
- A benchmark list for evaluating large language models ☆128 · Updated 2 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆304 · Updated last year
- Code and data for Tau-Bench ☆624 · Updated 5 months ago
- ☆520 · Updated 7 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆317 · Updated 7 months ago
- Repository for Zochi's research ☆221 · Updated 3 weeks ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆336 · Updated 9 months ago