lmarena / p2l
Prompt-to-Leaderboard
☆271 · Updated 8 months ago
Alternatives and similar repositories for p2l
Users interested in p2l are comparing it to the repositories listed below.
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆566 · Updated 8 months ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆418 · Updated last week
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆532 · Updated this week
- AWM: Agent Workflow Memory ☆387 · Updated last month
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆327 · Updated 2 months ago
- Official repository for DR Tulu: Reinforcement Learning with Evolving Rubrics for Deep Research ☆524 · Updated last week
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆667 · Updated 10 months ago
- ☆517 · Updated last month
- Resources for our paper "Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training" ☆166 · Updated 3 months ago
- Building Open LLM Web Agents with Self-Evolving Online Curriculum RL ☆499 · Updated 7 months ago
- [ICLR 2026] Learning to Reason without External Rewards ☆388 · Updated this week
- Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory ☆246 · Updated 8 months ago
- Parallel Scaling Law for Language Models — Beyond Parameter and Inference-Time Scaling ☆469 · Updated 8 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆255 · Updated 8 months ago
- 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agents, ACL'24 Best Resource… ☆362 · Updated 2 months ago
- An evaluation benchmark for MCP servers ☆236 · Updated 4 months ago
- Beating the GAIA benchmark with Transformers Agents 🚀 ☆145 · Updated 11 months ago
- DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents ☆552 · Updated 2 months ago
- Official implementation of the paper "On the Diagram of Thought" (https://arxiv.org/abs/2409.10038) ☆191 · Updated last week
- [NeurIPS 2025] Atom of Thoughts for Markov LLM Test-Time Scaling ☆638 · Updated 2 months ago
- Scaling RL on advanced reasoning models ☆661 · Updated 3 months ago
- ☆320 · Updated last year
- OS-ATLAS: A Foundation Action Model for Generalist GUI Agents ☆431 · Updated 9 months ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆624 · Updated 6 months ago
- ☆328 · Updated 6 months ago
- Harbor is a framework for running agent evaluations and for creating and using RL environments. ☆488 · Updated this week
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆273 · Updated 3 months ago
- A simple unified framework for evaluating LLMs ☆261 · Updated 9 months ago
- Multi-Faceted AI Agent and Workflow Autotuning: automatically optimizes LangChain, LangGraph, and DSPy programs for better quality and lower exe… ☆267 · Updated 8 months ago
- Training teachers with reinforcement learning to make LLMs learn to reason for test-time scaling ☆358 · Updated 7 months ago