SWE-bench / SWE-bench
SWE-bench: Can Language Models Resolve Real-world GitHub Issues?
☆4,232 · Updated last week
Alternatives and similar repositories for SWE-bench
Users interested in SWE-bench are comparing it to the repositories listed below.
- Agentless🐱: an agentless approach to automatically solve software development problems ☆2,006 · Updated last year
- A project structure aware autonomous software engineer aiming for autonomous program improvement. Resolved 37.3% tasks (pass@1) in SWE-be… ☆3,053 · Updated 9 months ago
- This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software E… ☆1,439 · Updated 6 months ago
- Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024 ☆1,683 · Updated 4 months ago
- SWE-agent takes a GitHub issue and tries to automatically fix it, using your LM of choice. It can also be employed for offensive cybersec… ☆18,430 · Updated this week
- Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhan… ☆1,579 · Updated last year
- The 100 line AI agent that solves GitHub issues or helps you in your command line. Radically simple, no huge configs, no giant monorepo—b… ☆2,726 · Updated this week
- [ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct ☆2,076 · Updated last year
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,301 · Updated 3 weeks ago
- Home of StarCoder2! ☆2,036 · Updated last year
- Code and Data for Tau-Bench ☆1,087 · Updated 5 months ago
- Code for the paper "Evaluating Large Language Models Trained on Code" ☆3,114 · Updated last year
- LDB: A Large Language Model Debugger via Verifying Runtime Execution Step by Step (ACL'24) ☆576 · Updated last year
- Official implementation for the paper "Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering" ☆3,922 · Updated last year
- 👨‍💻 An awesome and curated list of the best code LLMs for research. ☆1,277 · Updated last year
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆1,032 · Updated this week
- Code repo for "WebArena: A Realistic Web Environment for Building Autonomous Agents" ☆1,318 · Updated 2 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆786 · Updated 6 months ago
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆3,151 · Updated 2 months ago
- A benchmark for LLMs on complicated tasks in the terminal ☆1,494 · Updated 2 weeks ago
- [NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments ☆2,552 · Updated this week
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality ☆4,581 · Updated last year
- [NeurIPS 2023] Reflexion: Language Agents with Verbal Reinforcement Learning ☆3,059 · Updated last year
- Renderer for the harmony response format to be used with gpt-oss ☆4,171 · Updated last month
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆2,167 · Updated last year
- Arena-Hard-Auto: An automatic LLM benchmark. ☆994 · Updated 7 months ago
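Several entries above report pass@1 scores (e.g. "Resolved 37.3% tasks (pass@1)"). The metric comes from the "Evaluating Large Language Models Trained on Code" paper listed above; a minimal sketch of its unbiased pass@k estimator, assuming `n` generations per task with `c` of them passing:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from "Evaluating Large Language Models
    Trained on Code": the probability that at least one of k samples,
    drawn from n generations of which c are correct, passes."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n = 10 generations per task and c = 3 passing, pass@1 reduces
# to the plain fraction correct: 1 - C(7,1)/C(10,1) = 0.3
score = pass_at_k(10, 3, 1)
```

A benchmark's headline pass@1 is then the mean of this quantity over all tasks.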