smallcloudai / refact-bench
A benchmarking tool for evaluating AI coding assistants on real-world software engineering tasks from the SWE-Bench dataset.
☆61 · Updated 5 months ago
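For context on what a SWE-Bench task looks like, here is a minimal sketch of pulling one instance with the Hugging Face `datasets` library. It assumes the public `princeton-nlp/SWE-bench` dataset; refact-bench's own harness may load and run tasks differently.

```python
# Minimal sketch: inspect one SWE-Bench task instance.
# Assumes the public princeton-nlp/SWE-bench dataset on Hugging Face;
# refact-bench's own loading code may differ.
from datasets import load_dataset

ds = load_dataset("princeton-nlp/SWE-bench", split="test")
task = ds[0]
print(task["instance_id"])               # repo + issue identifier
print(task["problem_statement"][:200])   # the GitHub issue the agent must resolve
```

Each instance pairs a repository snapshot with an issue description; a benchmark harness asks the coding assistant to produce a patch and then checks it against the instance's tests.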
Alternatives and similar repositories for refact-bench
Users interested in refact-bench are comparing it to the repositories listed below.
- Harness used to benchmark aider against the SWE-bench benchmark ☆78 · Updated last year
- ☆124 · Updated 5 months ago
- SWE-Bench Pro: Can AI Agents Solve Long-Horizon Software Engineering Tasks? ☆217 · Updated last week
- ☆59 · Updated 10 months ago
- Run SWE-bench evaluations remotely ☆44 · Updated 3 months ago
- Sandboxed code execution for AI agents, locally or in the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆376 · Updated this week
- A Text-Based Environment for Interactive Debugging ☆277 · Updated this week
- [ACL'25 Findings] SWE-Dev is an SWE agent with a scalable test case construction pipeline. ☆56 · Updated 4 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆230 · Updated 7 months ago
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆77 · Updated last year
- Coding problems used in aider's polyglot benchmark ☆193 · Updated 11 months ago
- ☆102 · Updated last year
- [FORGE 2025] Graph-based method for end-to-end code completion with repository-level context awareness ☆68 · Updated last year
- A DSPy-based implementation of the Tree of Thoughts method (Yao et al., 2023) for generating persuasive arguments ☆93 · Updated last month
- CodeSage: Code Representation Learning at Scale (ICLR 2024) ☆114 · Updated last year
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆48 · Updated 2 months ago
- Agent-computer interface for an AI software engineer ☆114 · Updated 2 months ago
- Source code for the paper "INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair"