smallcloudai / refact-bench
A benchmarking tool for evaluating AI coding assistants on real-world software engineering tasks from the SWE-Bench dataset.
☆55 · Updated 3 months ago
Alternatives and similar repositories for refact-bench
Users interested in refact-bench are comparing it to the libraries listed below.
- SWE-Bench Pro: Can AI Agents Solve Long-Horizon Software Engineering Tasks? ☆107 · Updated this week
- ☆112 · Updated 3 months ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆321 · Updated this week
- A Text-Based Environment for Interactive Debugging ☆266 · Updated this week
- Run SWE-bench evaluations remotely ☆42 · Updated last month
- Coding problems used in aider's polyglot benchmark ☆180 · Updated 9 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆215 · Updated 5 months ago
- ☆100 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆117 · Updated 10 months ago
- Harness used to benchmark aider against SWE Bench benchmarks ☆76 · Updated last year
- Agent computer interface for AI software engineer. ☆111 · Updated last week
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆112 · Updated 11 months ago
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆47 · Updated 2 weeks ago
- CodeMind is a generic framework for evaluating inductive code reasoning of LLMs. It is equipped with a static analysis component that ena… ☆39 · Updated 5 months ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆407 · Updated this week
- Accompanying code and SEP dataset for the "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" paper. ☆55 · Updated 6 months ago
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆89 · Updated 11 months ago
- DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs. ☆182 · Updated 4 months ago
- [ACL'25 Findings] SWE-Dev is an SWE agent with a scalable test case construction pipeline. ☆55 · Updated 2 months ago
- Aider's refactoring benchmark exercises based on popular python repos ☆77 · Updated 11 months ago
- ☆56 · Updated 7 months ago
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆74 · Updated last year
- EvoEval: Evolving Coding Benchmarks via LLM ☆76 · Updated last year
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆40 · Updated this week
- Code for our paper PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles ☆55 · Updated 4 months ago
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆316 · Updated 7 months ago
- ☆164 · Updated 3 weeks ago
- Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆213 · Updated last week
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆56 · Updated last week
- A framework for optimizing DSPy programs with RL ☆182 · Updated this week