aorwall / moatless-testbeds
Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git patches and run tests or SWE-Bench evaluations.
☆14 · Updated 3 months ago
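In essence, a testbed run applies a candidate git patch to an isolated checkout and then executes the test suite. The sketch below illustrates that loop locally in Python; it is a conceptual illustration only, not the moatless-testbeds API (the project runs the equivalent steps inside isolated Kubernetes pods), and the repository and patch paths are placeholders.

```python
import subprocess
from pathlib import Path

def apply_patch_and_test(repo_dir: str, patch_file: str) -> bool:
    """Apply a git patch to a checkout and run its test suite.

    Conceptual sketch of what a testbed run does; moatless-testbeds
    performs the equivalent steps inside an isolated Kubernetes pod.
    """
    repo = Path(repo_dir).resolve()
    patch = str(Path(patch_file).resolve())

    # Dry-run the patch first so a malformed diff fails fast.
    subprocess.run(["git", "apply", "--check", patch], cwd=repo, check=True)

    # Apply the candidate code change to the working tree.
    subprocess.run(["git", "apply", patch], cwd=repo, check=True)

    # Run the tests; exit code 0 means the patched code passes.
    result = subprocess.run(["pytest", "-q"], cwd=repo)
    return result.returncode == 0

if __name__ == "__main__":
    # Placeholder paths: point these at a real checkout and diff file.
    print(apply_patch_and_test("./my-repo", "change.patch"))
```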
Alternatives and similar repositories for moatless-testbeds
Users interested in moatless-testbeds are comparing it to the repositories listed below
- ☆99 · Updated last month
- [ACL'25 Findings] SWE-Dev is an SWE agent with a scalable test case construction pipeline. ☆50 · Updated 2 weeks ago
- ☆61 · Updated last week
- ☆28 · Updated 2 weeks ago
- ☆11 · Updated 9 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆113 · Updated 9 months ago
- Computer Agent Arena: Test & compare AI agents in real desktop apps & web environments. Code/data coming soon! ☆47 · Updated 3 months ago
- ☆27 · Updated 6 months ago
- ☆66 · Updated 4 months ago
- ☆108 · Updated 2 months ago
- r2e: turn any github repository into a programming agent environment ☆129 · Updated 3 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆67 · Updated 11 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆103 · Updated last week
- Code for the paper "CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models" ☆24 · Updated 4 months ago
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆136 · Updated 3 weeks ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆91 · Updated 2 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆105 · Updated 2 months ago
- Training and Benchmarking LLMs for Code Preference. ☆34 · Updated 8 months ago
- ☆118 · Updated 5 months ago
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings ☆47 · Updated 6 months ago
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP 2024) ☆37 · Updated 7 months ago
- Implementation of the paper "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆59 · Updated 7 months ago
- Code for the paper "Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System" ☆59 · Updated 8 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆102 · Updated 4 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches. ☆75 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW ☆62 · Updated 10 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆151 · Updated 9 months ago
- 🔔🧠 Easily experiment with popular language agents across diverse reasoning/decision-making benchmarks! ☆52 · Updated 3 weeks ago
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 6 months ago