lmarena / p2l
Prompt-to-Leaderboard
☆254 · Updated 4 months ago
Alternatives and similar repositories for p2l
Users interested in p2l are comparing it to the repositories listed below.
- Scaling Data for SWE-agents ☆399 · Updated this week
- AWM: Agent Workflow Memory ☆316 · Updated 7 months ago
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆287 · Updated 6 months ago
- Atom of Thoughts for Markov LLM Test-Time Scaling ☆585 · Updated 2 months ago
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆547 · Updated 4 months ago
- Building Open LLM Web Agents with Self-Evolving Online Curriculum RL ☆455 · Updated 3 months ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆595 · Updated 5 months ago
- OS-ATLAS: A Foundation Action Model For Generalist GUI Agents ☆379 · Updated 4 months ago
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆443 · Updated 3 months ago
- DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents ☆380 · Updated last month
- [ICML2025] Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction ☆356 · Updated 6 months ago
- Resources for our paper: "Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training" ☆159 · Updated 3 months ago
- ☆292 · Updated 3 months ago
- Scaling RL on advanced reasoning models ☆583 · Updated last month
- 🦀️ CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents. https://crab.camel-ai.org/ ☆372 · Updated 2 months ago
- Code for the paper: "Learning to Reason without External Rewards" ☆353 · Updated 2 months ago
- Training teachers with reinforcement learning to teach LLMs how to reason for test-time scaling. ☆339 · Updated 2 months ago
- ☆116 · Updated 4 months ago
- ☆800 · Updated 2 weeks ago
- A MemAgent framework that can be extrapolated to 3.5M, along with a training framework for RL training of any agent workflow. ☆665 · Updated last month
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 7 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆243 · Updated 4 months ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆309 · Updated last week
- MiroThinker: open-source agentic models trained for deep research and complex tool-use scenarios. ☆305 · Updated this week
- Gödel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement ☆130 · Updated 7 months ago
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆404 · Updated this week
- ☆159 · Updated last year
- An evaluation benchmark for MCP servers ☆205 · Updated last week
- Official implementation of the paper "On the Diagram of Thought" (https://arxiv.org/abs/2409.10038) ☆185 · Updated 2 weeks ago
- A simple unified framework for evaluating LLMs ☆243 · Updated 5 months ago