Aider-AI / aider-swe-bench
Harness used to benchmark aider against SWE Bench benchmarks
☆76 · Updated last year
Alternatives and similar repositories for aider-swe-bench
Users interested in aider-swe-bench are comparing it to the repositories listed below
- ☆120 · Updated 4 months ago
- Aider's refactoring benchmark exercises based on popular python repos ☆77 · Updated last year
- ☆160 · Updated last year
- 🔔🧠 Easily experiment with popular language agents across diverse reasoning/decision-making benchmarks! ☆54 · Updated 3 months ago
- ☆99 · Updated last year
- Agent computer interface for AI software engineer. ☆110 · Updated last month
- Cognition's results and methodology on SWE-bench ☆120 · Updated last year
- ☆121 · Updated 5 months ago
- ☆58 · Updated 4 months ago
- Run SWE-bench evaluations remotely ☆41 · Updated 2 months ago
- ☆58 · Updated 8 months ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆339 · Updated last week
- Enhancing AI Software Engineering with Repository-level Code Graph ☆217 · Updated 6 months ago
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆133 · Updated 6 months ago
- Source code for paper: INTERVENOR: Prompt the Coding Ability of Large Language Models with the Interactive Chain of Repairing ☆26 · Updated 11 months ago
- [FORGE 2025] Graph-based method for end-to-end code completion with context awareness on repository ☆66 · Updated last year
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆75 · Updated last year
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆90 · Updated 3 weeks ago
- A library for benchmarking the Long Term Memory and Continual learning capabilities of LLM based agents. With all the tests and code you… ☆79 · Updated 10 months ago
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆47 · Updated last month
- ☆101 · Updated last year
- Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆218 · Updated this week
- Coding problems used in aider's polyglot benchmark ☆184 · Updated 10 months ago
- [ACL'25 Findings] SWE-Dev is an SWE agent with a scalable test case construction pipeline. ☆55 · Updated 3 months ago
- Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents ☆127 · Updated last year
- ☆41 · Updated last year
- ☆85 · Updated 2 years ago
- A set of utilities for running few-shot prompting experiments on large language models ☆123 · Updated 2 years ago
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆102 · Updated 2 months ago
- Implementation of the paper: "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆63 · Updated 10 months ago