Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task.
☆260 · Mar 29, 2026 · Updated last month
Alternatives and similar repositories for experiments
Users interested in experiments are comparing it to the libraries listed below.
- ☆105 · Jul 17, 2024 · Updated last year
- Run SWE-bench evaluations remotely ☆62 · Aug 14, 2025 · Updated 8 months ago
- SWE-bench: Can Language Models Resolve Real-World GitHub Issues? ☆4,783 · Apr 1, 2026 · Updated last month
- Harness used to benchmark aider against SWE Bench benchmarks ☆81 · Jun 27, 2024 · Updated last year
- Agentless🐱: an agentless approach to automatically solve software development problems ☆2,038 · Dec 22, 2024 · Updated last year
- Official implementation of the paper "How to Understand Whole Repository? New SOTA on SWE-bench Lite (21.3%)" ☆97 · Mar 26, 2025 · Updated last year
- ☆30 · Jan 8, 2025 · Updated last year
- ☆637 · Sep 1, 2025 · Updated 8 months ago
- Enhanced fork of SWE-bench, tailored for OpenDevin's ecosystem. ☆30 · May 26, 2024 · Updated last year
- Enhancing AI Software Engineering with Repository-level Code Graph ☆273 · Apr 1, 2025 · Updated last year
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆671 · Jul 29, 2025 · Updated 9 months ago
- ☆159 · Aug 27, 2024 · Updated last year
- Open-source repository for the OOPSLA'24 paper "CYCLE: Learning to Self-Refine Code Generation" ☆10 · Mar 8, 2024 · Updated 2 years ago
- ☆12 · Jan 31, 2024 · Updated 2 years ago
- ☆13 · Mar 5, 2025 · Updated last year
- ☆134 · Jun 6, 2025 · Updated 10 months ago
- Agent computer interface for AI software engineer. ☆127 · Apr 16, 2026 · Updated 2 weeks ago
- Artifact for TOSEM submission: GiantRepair ☆13 · Jun 26, 2024 · Updated last year
- CoCoMIC: Code Completion by Jointly Modeling In-file and Cross-file Context ☆19 · Feb 20, 2026 · Updated 2 months ago
- Commit0: Library Generation from Scratch ☆190 · Feb 24, 2026 · Updated 2 months ago
- ☆28 · Nov 10, 2025 · Updated 5 months ago
- Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆15 · Sep 4, 2024 · Updated last year
- ☆59 · Jan 28, 2025 · Updated last year
- Codev-Bench (Code Development Benchmark), a fine-grained, real-world, repository-level, and developer-centric evaluation framework. Codev… ☆49 · Nov 6, 2024 · Updated last year
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆632 · Apr 20, 2026 · Updated last week
- ☆25 · Jun 10, 2025 · Updated 10 months ago
- A project structure aware autonomous software engineer aiming for autonomous program improvement. Resolved 37.3% tasks (pass@1) in SWE-be… ☆3,070 · Apr 24, 2025 · Updated last year
- This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software E… ☆1,441 · Jul 18, 2025 · Updated 9 months ago
- A multi-programming language benchmark for LLMs ☆302 · Apr 12, 2026 · Updated 2 weeks ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆485 · Updated this week
- Inference code of Lingma SWE-GPT ☆258 · Dec 2, 2024 · Updated last year
- TDD-Bench-Verified is a new benchmark for generating test cases for test-driven development (TDD) ☆29 · Updated this week
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Dec 22, 2023 · Updated 2 years ago
- SWE-PolyBench: A multi-language benchmark for repository-level evaluation of coding agents ☆83 · Updated this week
- [LREC-COLING 2024] PECC: Problem Extraction and Coding Challenges ☆14 · May 30, 2024 · Updated last year
- ☆137 · May 8, 2025 · Updated 11 months ago
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆14 · Apr 9, 2025 · Updated last year
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆76 · Mar 23, 2026 · Updated last month
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems [ICLR 2024] ☆204 · Aug 16, 2024 · Updated last year