SWE-bench / sb-cli
Run SWE-bench evaluations remotely
☆50 · Updated 5 months ago
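For context, sb-cli is the command-line client for submitting predictions to the hosted SWE-bench evaluation service. The sketch below shows one way to drive it from Python; the subcommand, positional dataset/split arguments, flags, and the `SWEBENCH_API_KEY` environment variable are assumptions recalled from the project's README, not verified here, so confirm them against `sb-cli --help` before use.

```python
# Minimal sketch (assumptions noted): submit a predictions file for remote
# SWE-bench evaluation by shelling out to sb-cli. The subcommand name, the
# dataset/split arguments, the flag names, and the SWEBENCH_API_KEY variable
# are assumptions; check `sb-cli --help` for the actual interface.
import os
import subprocess

# Assumed environment variable used by the CLI for authentication.
env = dict(os.environ, SWEBENCH_API_KEY="<your-api-key>")

subprocess.run(
    [
        "sb-cli", "submit",
        "swe-bench_lite", "test",                  # assumed dataset and split
        "--predictions_path", "predictions.json",  # assumed flag name
        "--run_id", "my-eval-run",                 # assumed flag name
    ],
    check=True,
    env=env,
)
```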
Alternatives and similar repositories for sb-cli
Users interested in sb-cli are comparing it to the libraries listed below.
- [ACL'25 Findings] SWE-Dev is an SWE agent with a scalable test case construction pipeline. ☆57 · Updated 6 months ago
- ☆131 · Updated 8 months ago
- ☆130 · Updated 7 months ago
- Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆241 · Updated this week
- Harness used to benchmark aider against the SWE-bench benchmarks. ☆78 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆128 · Updated last year
- ☆106 · Updated last year
- ☆32 · Updated last year
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆96 · Updated 8 months ago
- ☆59 · Updated last year
- Enhancing AI Software Engineering with Repository-level Code Graph ☆248 · Updated 9 months ago
- SWE Arena ☆35 · Updated 6 months ago
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆14 · Updated 9 months ago
- Implementation of the paper: "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆69 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 10 months ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆68 · Updated 2 weeks ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆532 · Updated this week
- Agent-computer interface for AI software engineers. ☆115 · Updated last month
- Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory ☆243 · Updated 8 months ago
- Accompanying material for the sleep-time compute paper. ☆119 · Updated 8 months ago
- ☆41 · Updated last year
- ☆61 · Updated 7 months ago
- Code for the paper "CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models" ☆30 · Updated 9 months ago
- ☆28 · Updated 2 months ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆415 · Updated last week
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆80 · Updated last year
- Training and Benchmarking LLMs for Code Preference. ☆37 · Updated last year
- ☆216 · Updated this week
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆120 · Updated last month
- SWE-Bench Pro: Can AI Agents Solve Long-Horizon Software Engineering Tasks? ☆249 · Updated 3 weeks ago