swe-bench / SWE-bench
SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
☆4,267Updated last week
Alternatives and similar repositories for SWE-bench
Users interested in SWE-bench are comparing it to the libraries listed below
- Agentless🐱: an agentless approach to automatically solve software development problems☆2,006Updated last year
- A project-structure-aware autonomous software engineer aiming for autonomous program improvement. Resolved 37.3% of tasks (pass@1) in SWE-be…☆3,053Updated 9 months ago
- This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software E…☆1,439Updated 6 months ago
- Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024☆1,683Updated 4 months ago
- [NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments☆2,552Updated last week
- SWE-agent takes a GitHub issue and tries to automatically fix it, using your LM of choice. It can also be employed for offensive cybersec…☆18,430Updated this week
- Official implementation for the paper: "Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering"☆3,922Updated last year
- Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhan…☆1,579Updated last year
- ☆4,346Updated 6 months ago
- Code for the paper "Evaluating Large Language Models Trained on Code"☆3,127Updated last year
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality☆4,581Updated last year
- Code repo for "WebArena: A Realistic Web Environment for Building Autonomous Agents"☆1,327Updated 2 months ago
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)☆3,151Updated 2 months ago
- LiveBench: A Challenging, Contamination-Free LLM Benchmark☆1,032Updated this week
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering☆1,301Updated 3 weeks ago
- A library for advanced large language model reasoning☆2,328Updated 8 months ago
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach…☆5,823Updated 3 months ago
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling☆1,822Updated last year
- LDB: A Large Language Model Debugger via Verifying Runtime Execution Step by Step (ACL'24)☆576Updated last year
- AIDE: AI-Driven Exploration in the Space of Code. A machine learning engineering agent that automates AI R&D.☆1,127Updated 3 months ago
- AIOS: AI Agent Operating System☆5,060Updated 3 weeks ago
- ☆626Updated 5 months ago
- Code and Data for Tau-Bench☆1,087Updated 5 months ago
- [ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct☆2,076Updated last year
- Democratizing Reinforcement Learning for LLMs☆5,081Updated this week
- Optimizing inference proxy for LLMs☆3,317Updated 2 weeks ago
- AllenAI's post-training codebase☆3,573Updated this week
- A self-improving embodied conversational agent seamlessly integrated into the operating system to automate our daily tasks.☆1,748Updated last year
- Sky-T1: Train your own O1 preview model within $450☆3,370Updated 7 months ago
- A framework for prompt tuning using Intent-based Prompt Calibration☆2,927Updated 2 months ago