smallcloudai / refact-bench
A benchmarking tool for evaluating AI coding assistants on real-world software engineering tasks from the SWE-Bench dataset.
☆61 · Updated 6 months ago
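For orientation, the SWE-Bench tasks that refact-bench evaluates against are published on Hugging Face. Below is a minimal sketch of loading them with the `datasets` library; this is background on the data source only (an assumption on my part), not refact-bench's own loading pipeline:

```python
# Minimal sketch: loading SWE-Bench instances from Hugging Face.
# Assumes the `datasets` library; refact-bench's own task loader may differ.
from datasets import load_dataset

# The public SWE-Bench test split: real GitHub issues paired with
# the gold patches that resolved them.
swe_bench = load_dataset("princeton-nlp/SWE-bench", split="test")

task = swe_bench[0]
print(task["repo"])               # source repository, e.g. "astropy/astropy"
print(task["instance_id"])        # unique task identifier
print(task["problem_statement"])  # the issue text the agent must resolve
```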
Alternatives and similar repositories for refact-bench
Users interested in refact-bench are comparing it to the repositories listed below:
- ☆128 · Updated 6 months ago
- SWE-Bench Pro: Can AI Agents Solve Long-Horizon Software Engineering Tasks? ☆228 · Updated last month
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆114 · Updated last year
- ☆59 · Updated 10 months ago
- Run SWE-bench evaluations remotely ☆46 · Updated 4 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆125 · Updated last year
- LLM-based mutation testing ☆11 · Updated 10 months ago
- Harness used to benchmark aider against SWE Bench benchmarks ☆78 · Updated last year
- Enhancing AI Software Engineering with Repository-level Code Graph ☆237 · Updated 8 months ago
- ☆102 · Updated last year
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆391 · Updated this week
- PromptMII: Meta-Learning Instruction Induction for LLMs ☆45 · Updated 3 weeks ago
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆79 · Updated last year
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆48 · Updated 3 months ago
- Live-SWE-agent: a live, runtime self-evolving software engineering agent ☆143 · Updated this week
- Coding problems used in aider's polyglot benchmark ☆198 · Updated 11 months ago
- A Text-Based Environment for Interactive Debugging ☆284 · Updated this week
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆64 · Updated this week
- Incremental Python parser for constrained generation of code by LLMs. ☆18 · Updated last year
- A Python framework for building AI agent systems with robust task management in the form of a graph execution engine, inference capabilit… ☆31 · Updated 6 months ago
- Accompanying code and SEP dataset for the "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" paper. ☆57 · Updated 9 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆40 · Updated 2 months ago
- Code and data artifact for the NeurIPS 2023 paper "Monitor-Guided Decoding of Code LMs with Static Analysis of Repository Context". `multis… ☆277 · Updated last year
- ☆78 · Updated last year
- ☆40 · Updated 7 months ago
- Source code for the paper "INTERVENOR: Prompt the Coding Ability of Large Language Models with the Interactive Chain of Repairing" ☆28 · Updated last year
- Pivotal Token Search ☆135 · Updated this week
- DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs. ☆182 · Updated 7 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆80 · Updated last year
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆14 · Updated 8 months ago