openai / SWELancer-Benchmark
This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?"
⭐ 1,439 · Updated 6 months ago
Alternatives and similar repositories for SWELancer-Benchmark
Users that are interested in SWELancer-Benchmark are comparing it to the libraries listed below
- OpenAI Frontier Evals ⭐ 990 · Updated 2 months ago
- Agentless 🐱: an agentless approach to automatically solve software development problems ⭐ 2,007 · Updated last year
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ⭐ 673 · Updated 10 months ago
- A benchmark for LLMs on complicated tasks in the terminal ⭐ 1,494 · Updated 2 weeks ago
- An agent benchmark with tasks in a simulated software company. ⭐ 635 · Updated 2 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ⭐ 1,295 · Updated 3 weeks ago
- The #1 open-source SWE-bench Verified implementation ⭐ 853 · Updated 7 months ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ⭐ 625 · Updated 6 months ago
- ⭐ 624 · Updated 5 months ago
- SWE-bench: Can Language Models Resolve Real-world GitHub Issues? ⭐ 4,232 · Updated this week
- OO for LLMs ⭐ 892 · Updated this week
- Code and Data for Tau-Bench ⭐ 1,087 · Updated 5 months ago
- E2B Desktop Sandbox for LLMs. E2B Sandbox with desktop graphical environment that you can connect to any LLM for secure computer use. ⭐ 1,240 · Updated this week
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ⭐ 424 · Updated last week
- Renderer for the harmony response format to be used with gpt-oss ⭐ 4,171 · Updated last month
- Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents ⭐ 1,810 · Updated 5 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ⭐ 786 · Updated 6 months ago
- Code for "WebVoyager: WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models"β1,006Updated last year
- Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of multi-modal AI agents. ⭐ 816 · Updated 9 months ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ⭐ 538 · Updated this week
- End-to-end Generative Optimization for AI Agents ⭐ 707 · Updated last month
- [ICLR 2025] Automated Design of Agentic Systems ⭐ 1,506 · Updated last year
- Learn how to use CUA (our Computer Using Agent) via the API on multiple computer environments. ⭐ 1,259 · Updated 9 months ago
- LDB: A Large Language Model Debugger via Verifying Runtime Execution Step by Step (ACL'24) ⭐ 576 · Updated last year
- Open-source resources on agents for computer use. ⭐ 398 · Updated 3 months ago
- Sky-T1: Train your own O1 preview model within $450 ⭐ 3,370 · Updated 6 months ago
- Humanity's Last Exam ⭐ 1,323 · Updated 3 months ago
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ⭐ 1,029 · Updated last week
- ⭐ 2,568 · Updated last week
- Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ⭐ 246 · Updated this week