Aider-AI / aider-swe-bench
Harness used to benchmark aider on SWE-bench
☆78 · Updated last year
Alternatives and similar repositories for aider-swe-bench
Users interested in aider-swe-bench are comparing it to the repositories listed below.
- ☆126 · Updated 6 months ago
- Open-sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆225 · Updated this week
- ☆102 · Updated last year
- Aider's refactoring benchmark exercises based on popular Python repos ☆78 · Updated last year
- Enhancing AI Software Engineering with Repository-level Code Graph ☆232 · Updated 8 months ago
- Cognition's results and methodology on SWE-bench ☆122 · Updated last year
- Run SWE-bench evaluations remotely ☆44 · Updated 3 months ago
- Agent-computer interface for an AI software engineer. ☆114 · Updated 2 months ago
- ☆159 · Updated last year
- ☆59 · Updated 10 months ago
- A system that tries to resolve all issues on a GitHub repo with OpenHands. ☆117 · Updated last year
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆382 · Updated this week
- ☆62 · Updated 5 months ago
- Coding problems used in aider's polyglot benchmark ☆194 · Updated 11 months ago
- ☆67 · Updated 6 months ago
- Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents ☆131 · Updated last year
- 🔔🧠 Easily experiment with popular language agents across diverse reasoning/decision-making benchmarks! ☆54 · Updated 4 months ago
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆78 · Updated last year
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆14 · Updated 7 months ago
- [FORGE 2025] Graph-based method for end-to-end code completion with repository-level context awareness ☆68 · Updated last year
- ☆102 · Updated last year
- 🔧 Compare how agent systems perform on several benchmarks. 📊🚀 ☆102 · Updated 4 months ago
- Accompanying material for the sleep-time compute paper ☆118 · Updated 7 months ago
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆93 · Updated 2 months ago
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆48 · Updated 2 months ago
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆136 · Updated 7 months ago
- Small, simple agent task environments for training and evaluation ☆19 · Updated last year
- ☆41 · Updated last year
- Beating the GAIA benchmark with Transformers Agents. 🚀 ☆138 · Updated 9 months ago
- DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs. ☆182 · Updated 6 months ago