aorwall / moatless-testbeds
Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git patches and run tests or SWE-Bench evaluations.
☆14 · Updated 8 months ago
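The core loop is straightforward: provision an isolated environment for a given repository, apply a candidate change as a git patch, and run the test suite against the patched tree. The sketch below reproduces that loop locally with plain `git` and `subprocess`; it illustrates the workflow only, and the function name, file paths, and `pytest` command are assumptions, not the project's actual SDK or API.

```python
import subprocess
from pathlib import Path

def apply_patch_and_test(repo_dir: str, patch: str, test_cmd: list[str]) -> bool:
    """Illustrative sketch of the testbed loop: apply a git patch, run tests.
    The real project runs this inside an isolated Kubernetes pod; here we
    simply shell out locally. All names here are assumptions, not the SDK."""
    repo = Path(repo_dir)
    # Validate the unified diff first, then apply it to the working tree.
    subprocess.run(["git", "apply", "--check", "-"], input=patch,
                   text=True, cwd=repo, check=True)
    subprocess.run(["git", "apply", "-"], input=patch,
                   text=True, cwd=repo, check=True)
    # Run the test suite; a zero exit code means the patch passes.
    result = subprocess.run(test_cmd, cwd=repo)
    return result.returncode == 0

if __name__ == "__main__":
    patch_text = Path("candidate.patch").read_text()  # hypothetical input file
    ok = apply_patch_and_test("workdir/repo", patch_text, ["pytest", "-x"])
    print("PASS" if ok else "FAIL")
```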
Alternatives and similar repositories for moatless-testbeds
Users interested in moatless-testbeds are comparing it to the repositories listed below
- ☆11 · Updated last year
- ☆128 · Updated 7 months ago
- 🔔🧠 Easily experiment with popular language agents across diverse reasoning/decision-making benchmarks! ☆54 · Updated 5 months ago
- [ACL'25 Findings] SWE-Dev is an SWE agent with a scalable test case construction pipeline. ☆56 · Updated 4 months ago
- ☆28 · Updated 3 weeks ago
- ☆67 · Updated 8 months ago
- Computer Agent Arena: Test & compare AI agents in real desktop apps & web environments. Code/data coming soon! ☆51 · Updated 8 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆125 · Updated last year
- Code for the paper: CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models ☆30 · Updated 8 months ago
- ☆126 · Updated 6 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- Implementation of the paper: "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆66 · Updated last year
- Small, simple agent task environments for training and evaluation ☆19 · Updated last year
- ☆41 · Updated last year
- Agentless Lite: RAG-based SWE-Bench software engineering scaffold ☆43 · Updated 7 months ago
- ☆41 · Updated 5 months ago
- Run SWE-bench evaluations remotely ☆44 · Updated 3 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆73 · Updated last year
- ☆29 · Updated last week
- Multi-Granularity LLM Debugger [ICSE 2026] ☆93 · Updated 5 months ago
- ☆84 · Updated last month
- Aioli: A unified optimization framework for language model data mixing ☆31 · Updated 10 months ago
- ☆17 · Updated 8 months ago
- ☆62 · Updated 5 months ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- ☆98 · Updated 4 months ago
- A framework for pitting LLMs against each other in an evolving library of games ⚔ ☆34 · Updated 7 months ago
- The Tool Decathlon: Benchmarking Language Agents for Diverse, Realistic, and Long-Horizon Task Execution ☆154 · Updated this week
- Code Implementation, Evaluations, Documentation, Links and Resources for the Min P paper (see the sketch after this list) ☆45 · Updated 3 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆94 · Updated 6 months ago
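As a pointer for the Min P entry above: min-p sampling keeps only tokens whose probability is at least a fraction `p_min` of the top token's probability, then renormalizes, so the filter is aggressive when the model is confident and permissive when it is uncertain. A minimal NumPy sketch of the technique, with an arbitrary example distribution (the repository's own implementation may differ):

```python
import numpy as np

def min_p_filter(probs: np.ndarray, p_min: float = 0.1) -> np.ndarray:
    """Min-p sampling sketch: zero out tokens with prob < p_min * max(prob),
    then renormalize the surviving probabilities."""
    threshold = p_min * probs.max()
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()

# Example: with p_min=0.1 the cutoff is 0.1 * 0.6 = 0.06, so the two
# lowest-probability tokens are dropped and the rest are renormalized.
probs = np.array([0.6, 0.2, 0.15, 0.04, 0.01])
print(min_p_filter(probs))
```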