SWE-bench / sb-cli
Run SWE-bench evaluations remotely
☆42 · Updated last month
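A minimal sketch of what a remote submission through sb-cli might look like. The `submit` subcommand, the `--predictions_path` and `--run_id` flags, and the `SWEBENCH_API_KEY` environment variable follow the project README as I understand it; the dataset name, instance ID, and run ID below are hypothetical placeholders, and `sb-cli --help` is the authoritative reference.

```python
import json
import subprocess

# Hypothetical predictions for a single SWE-bench instance. The schema
# (instance_id / model_name_or_path / model_patch) is the standard
# SWE-bench predictions format; the instance ID and patch are placeholders.
predictions = [
    {
        "instance_id": "astropy__astropy-12907",
        "model_name_or_path": "my-model",
        "model_patch": "diff --git a/... (unified diff of the proposed fix)",
    }
]

with open("preds.json", "w") as f:
    json.dump(predictions, f)

# Assumed invocation based on the project README: submit the predictions
# to the SWE-bench Lite dev split under a chosen run ID. An API key is
# expected in the SWEBENCH_API_KEY environment variable.
subprocess.run(
    [
        "sb-cli", "submit", "swe-bench_lite", "dev",
        "--predictions_path", "preds.json",
        "--run_id", "my-first-run",
    ],
    check=True,
)
```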
Alternatives and similar repositories for sb-cli
Users interested in sb-cli are comparing it to the repositories listed below
- ☆111 · Updated 3 months ago
- [ACL'25 Findings] SWE-Dev is an SWE agent with a scalable test-case construction pipeline. ☆55 · Updated last month
- RepoQA: Evaluating Long-Context Code Understanding ☆117 · Updated 10 months ago
- Implementation of the paper "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆62 · Updated 9 months ago
- Moatless Testbeds lets you create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆15 · Updated 5 months ago
- ☆116 · Updated 4 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (Oral, ACL 2024 SRW) ☆62 · Updated 11 months ago
- ☆99 · Updated last year
- Open-sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆212 · Updated this week
- [EMNLP'23] Execution-Based Evaluation for Open-Domain Code Generation ☆49 · Updated last year
- Harness used to benchmark aider against SWE-bench benchmarks ☆76 · Updated last year
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆47 · Updated last week
- ☆38 · Updated 2 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆93 · Updated 4 months ago
- ☆67 · Updated 9 months ago
- ☆52 · Updated last year
- Training and Benchmarking LLMs for Code Preference. ☆35 · Updated 10 months ago
- ☆56 · Updated 2 months ago
- ☆55 · Updated 7 months ago
- Source code for the paper "INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair" ☆26 · Updated 9 months ago
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ☆46 · Updated 8 months ago
- ☆99 · Updated last year
- Scaling Data for SWE-agents ☆399 · Updated last week
- Small, simple agent task environments for training and evaluation ☆18 · Updated 10 months ago
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated last year
- ☆30 · Updated last year
- Sandboxed code execution for AI agents, locally or in the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆315 · Updated this week
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆182 · Updated 6 months ago
- ☆28 · Updated 2 weeks ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆56 · Updated this week