Open-sourced predictions, execution logs, trajectories, and results from model inference and evaluation runs on the SWE-bench task.
☆247 · Updated this week
Alternatives and similar repositories for experiments
Users interested in experiments are comparing it to the repositories listed below.
- ☆104 · Updated Jul 17, 2024
- Run SWE-bench evaluations remotely · ☆58 · Updated Aug 14, 2025
- SWE-bench: Can Language Models Resolve Real-World GitHub Issues? · ☆4,337 · Updated Feb 19, 2026
- Official implementation of the paper "How to Understand Whole Repository? New SOTA on SWE-bench Lite (21.3%)" · ☆97 · Updated Mar 26, 2025
- Harness used to benchmark aider against the SWE-bench benchmark · ☆79 · Updated Jun 27, 2024
- Agentless 🐱: an agentless approach to automatically solving software development problems · ☆2,010 · Updated Dec 22, 2024
- ☆629 · Updated Sep 1, 2025
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" (ICML 2025) · ☆632 · Updated Jul 29, 2025
- Agentless Lite: a RAG-based SWE-bench software engineering scaffold · ☆45 · Updated Apr 15, 2025
- Enhancing AI Software Engineering with Repository-level Code Graph · ☆252 · Updated Apr 1, 2025
- Open-sourced predictions, execution logs, trajectories, and results from model inference and evaluation runs on the SWE-bench task · ☆15 · Updated Sep 4, 2024
- Agent-computer interface for an AI software engineer · ☆115 · Updated Dec 8, 2025
- ☆132 · Updated Jun 6, 2025
- ☆11 · Updated Jan 3, 2024
- Artifact for the TOSEM submission "GiantRepair" · ☆12 · Updated Jun 26, 2024
- Contains the model patches and eval logs from the passing SWE-bench Lite run · ☆10 · Updated Jun 28, 2024
- ☆12 · Updated Mar 5, 2025
- Open-source repository for the OOPSLA '24 paper "CYCLE: Learning to Self-Refine Code Generation" · ☆10 · Updated Mar 8, 2024
- ☆46 · Updated Jan 17, 2026
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation · ☆71 · Updated Jan 15, 2026
- Commit0: Library Generation from Scratch · ☆186 · Updated this week
- ☆28 · Updated Nov 10, 2025
- Landing page and leaderboard for the SWE-bench benchmark · ☆11 · Updated this week
- ☆132 · Updated May 8, 2025
- Moatless Testbeds lets you create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… · ☆14 · Updated Apr 9, 2025
- Codev-Bench (Code Development Benchmark): a fine-grained, real-world, repository-level, developer-centric evaluation framework. Codev… · ☆50 · Updated Nov 6, 2024
- [EMNLP '23] Execution-Based Evaluation for Open-Domain Code Generation · ☆49 · Updated Dec 22, 2023
- ☆60 · Updated Jan 28, 2025
- Sandboxed code execution for AI agents, locally or in the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. · ☆443 · Updated this week
- This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software E… · ☆1,439 · Updated Jul 18, 2025
- A multi-programming-language benchmark for LLMs · ☆298 · Updated Jan 28, 2026
- Rigorous evaluation of LLM-synthesized code (NeurIPS 2023 & COLM 2024) · ☆1,688 · Updated Oct 2, 2025
- ☆18 · Updated Apr 15, 2024
- A project-structure-aware autonomous software engineer aiming for autonomous program improvement. Resolved 37.3% of tasks (pass@1) on SWE-be… · ☆3,054 · Updated Apr 24, 2025
- ☆20 · Updated Nov 4, 2025
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) · ☆189 · Updated Aug 16, 2024
- The 100-line AI agent that solves GitHub issues or helps you in your command line. Radically simple, no huge configs, no giant monorepo—b… · ☆3,003 · Updated this week
- ☆11 · Updated Jul 29, 2022
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" · ☆803 · Updated Jul 16, 2025
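Several of the repositories above produce or consume SWE-bench prediction files: one JSON object per line, carrying the instance being solved and the model's proposed patch. As a rough sketch only (field names follow the predictions format documented by the SWE-bench harness; check your harness version, and note the sample values below are illustrative), a minimal validator for such a file might look like:

```python
import json

# Fields the SWE-bench harness expects in each predictions entry
# (assumption: this matches the harness version you are running).
REQUIRED_FIELDS = {"instance_id", "model_name_or_path", "model_patch"}


def validate_predictions(lines):
    """Parse JSONL prediction lines and check required fields.

    Returns the parsed entries; raises ValueError on the first
    malformed entry so bad files fail fast before an eval run.
    """
    entries = []
    for i, line in enumerate(lines, start=1):
        if not line.strip():
            continue  # tolerate blank lines
        entry = json.loads(line)
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"line {i}: missing fields {sorted(missing)}")
        entries.append(entry)
    return entries


if __name__ == "__main__":
    # Hypothetical entry for illustration; the patch is a unified diff string.
    sample = [
        json.dumps({
            "instance_id": "django__django-11099",
            "model_name_or_path": "my-model",
            "model_patch": "diff --git a/x.py b/x.py\n",
        })
    ]
    print(len(validate_predictions(sample)))  # 1
```

Running such a check before launching an evaluation is cheap insurance: a single entry with a missing `model_patch` otherwise surfaces only partway through a long containerized run.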