google-deepmind / questbench
☆22 · Updated last month
Alternatives and similar repositories for questbench
Users interested in questbench are comparing it to the libraries listed below.
- Reinforcing General Reasoning without Verifiers · ☆60 · Updated last week
- ☆17 · Updated 3 months ago
- ☆47 · Updated 3 weeks ago
- ☆115 · Updated 4 months ago
- ☆13 · Updated 10 months ago
- ☆53 · Updated last week
- A testbed for agents and environments that can automatically improve models through data generation. · ☆24 · Updated 3 months ago
- Intelligent Go-Explore: Standing on the Shoulders of Giant Foundation Models · ☆58 · Updated 4 months ago
- ☆33 · Updated 4 months ago
- Implementation of the paper: "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" · ☆57 · Updated 6 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples · ☆95 · Updated 2 weeks ago
- Aioli: A unified optimization framework for language model data mixing · ☆27 · Updated 5 months ago
- [ICLR 2025] "Training LMs on Synthetic Edit Sequences Improves Code Synthesis" (Piterbarg, Pinto, Fergus) · ☆19 · Updated 4 months ago
- ☆32 · Updated last month
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning https://arxiv.org/pdf/2410.01044 · ☆33 · Updated 8 months ago
- AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories · ☆18 · Updated last month
- ☆97 · Updated 11 months ago
- ☆36 · Updated 2 weeks ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" · ☆59 · Updated 4 months ago
- ☆43 · Updated 2 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment · ☆57 · Updated 9 months ago
- Official repo for InSTA: Towards Internet-Scale Training For Agents · ☆42 · Updated this week
- Process Reward Models That Think · ☆41 · Updated 3 weeks ago
- ☆32 · Updated 5 months ago
- A framework for pitting LLMs against each other in an evolving library of games ⚔ · ☆32 · Updated 2 months ago
- Repository for Skill Set Optimization · ☆13 · Updated 10 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators · ☆42 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences · ☆71 · Updated last year
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems · ☆93 · Updated 2 weeks ago
- The official implementation of Regularized Policy Gradient (RPG) (https://arxiv.org/abs/2505.17508) · ☆35 · Updated last week