swe-bench / SWE-bench
SWE-bench [Multimodal]: Can Language Models Resolve Real-World GitHub Issues?
☆3,115 · Updated this week
Alternatives and similar repositories for SWE-bench
Users interested in SWE-bench are comparing it to the libraries listed below.
- A project-structure-aware autonomous software engineer aiming for autonomous program improvement. Resolved 37.3% of tasks (pass@1) in SWE-be… ☆2,961 · Updated 2 months ago
- Agentless🐱: an agentless approach to automatically solve software development problems ☆1,769 · Updated 6 months ago
- Official implementation for the paper "Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering" ☆3,862 · Updated 7 months ago
- ☆3,764 · Updated last month
- SWE-agent takes a GitHub issue and tries to automatically fix it, using your LM of choice. It can also be employed for offensive cybersec… ☆16,552 · Updated this week
- [NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments ☆1,956 · Updated this week
- This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software E… ☆1,417 · Updated last month
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,231 · Updated 3 months ago
- A code-first agent framework for seamlessly planning and executing data analytics tasks. ☆5,788 · Updated last month
- [ICLR 2025] Automated Design of Agentic Systems ☆1,351 · Updated 5 months ago
- A library for advanced large language model reasoning ☆2,159 · Updated 3 weeks ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆780 · Updated 2 weeks ago
- A self-improving embodied conversational agent seamlessly integrated into the operating system to automate our daily tasks. ☆1,662 · Updated 9 months ago
- Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024 ☆1,499 · Updated last month
- Official repo for the ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhan… ☆1,274 · Updated last year
- AIOS: AI Agent Operating System ☆4,305 · Updated 3 weeks ago
- ☆2,099 · Updated last week
- Code and data for Tau-Bench ☆637 · Updated 5 months ago
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality ☆4,071 · Updated 10 months ago
- PyTorch-native post-training library ☆5,296 · Updated this week
- Streamlines and simplifies prompt design for both developers and non-technical users with a low-code approach. ☆1,084 · Updated last week
- LDB: A Large Language Model Debugger via Verifying Runtime Execution Step by Step (ACL'24) ☆547 · Updated 9 months ago
- Together Mixture-of-Agents (MoA) – 65.1% on AlpacaEval with OSS models ☆2,766 · Updated 5 months ago
- A unified evaluation framework for large language models ☆2,656 · Updated last month
- Code for the paper "Evaluating Large Language Models Trained on Code" ☆2,805 · Updated 5 months ago
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling ☆1,707 · Updated 11 months ago
- Optimizing inference proxy for LLMs ☆2,589 · Updated this week
- A set of tools to assess and improve LLM security. ☆3,545 · Updated this week
- Tools for merging pretrained large language models. ☆5,937 · Updated 2 weeks ago
- Official implementation of "Graph of Thoughts: Solving Elaborate Problems with Large Language Models" ☆2,405 · Updated 6 months ago