aorwall / moatless-testbeds
Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git patches and run tests or SWE-Bench evaluations.
☆14 · Updated 9 months ago
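The description above boils down to a two-step loop: apply a git patch to a checkout, then run the test suite (or a SWE-Bench evaluation) against the result. As a rough illustration of what the service automates, here is a minimal local sketch of that loop. It is not the project's SDK: the function name, parameters, and test command are hypothetical, and the real system runs these steps inside isolated pods in a Kubernetes cluster behind an API.

```python
import subprocess
import tempfile

# Hypothetical illustration, not the moatless-testbeds API: the real service
# runs this loop inside isolated Kubernetes pods exposed through an API/SDK.
def apply_patch_and_test(repo_url: str, patch: str, test_cmd: list[str]) -> int:
    """Clone a repo, apply a git patch, and return the test suite's exit code."""
    with tempfile.TemporaryDirectory() as workdir:
        subprocess.run(["git", "clone", "--depth", "1", repo_url, workdir], check=True)
        # `git apply -` reads the patch from stdin; a non-applying hunk raises.
        subprocess.run(["git", "-C", workdir, "apply", "-"],
                       input=patch.encode(), check=True)
        # 0 means every test passed; anything else is a failure signal.
        return subprocess.run(test_cmd, cwd=workdir).returncode
```

The isolation is the point of running this in a cluster rather than locally: each run gets a fresh testbed, so a misbehaving patch cannot leak state into other evaluations.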
Alternatives and similar repositories for moatless-testbeds
Users interested in moatless-testbeds are comparing it to the repositories listed below.
- ☆11 · Updated last year
- ☆130 · Updated 8 months ago
- ☆28 · Updated 2 months ago
- Implementation of the paper: "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆65 · Updated last year
- [ACL'25 Findings] SWE-Dev is an SWE agent with a scalable test case construction pipeline. ☆57 · Updated 5 months ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆107 · Updated 10 months ago
- Code Implementation, Evaluations, Documentation, Links and Resources for the Min P paper ☆46 · Updated 5 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- Computer Agent Arena: Test & compare AI agents in real desktop apps & web environments. Code/data coming soon! ☆51 · Updated 9 months ago
- ☆128 · Updated 7 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆128 · Updated last year
- Multi-Granularity LLM Debugger [ICSE 2026] ☆94 · Updated 6 months ago
- Verifiers for LLM Reinforcement Learning ☆79 · Updated 8 months ago
- ☆63 · Updated 6 months ago
- 🔔🧠 Easily experiment with popular language agents across diverse reasoning/decision-making benchmarks! ☆53 · Updated 6 months ago
- ☆67 · Updated 9 months ago
- ☆32 · Updated 2 weeks ago
- Code for the paper "CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models" ☆30 · Updated 9 months ago
- ☆99 · Updated 5 months ago
- Run SWE-bench evaluations remotely ☆49 · Updated 4 months ago
- Small, simple agent task environments for training and evaluation ☆19 · Updated last year
- ☆17 · Updated 9 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆74 · Updated last year
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP 2024) ☆37 · Updated last year
- ☆88 · Updated 2 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆95 · Updated 7 months ago
- Training and Benchmarking LLMs for Code Preference. ☆37 · Updated last year
- ☆41 · Updated last year
- SWE Arena ☆35 · Updated 6 months ago