openai / SWELancer-Benchmark
This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?"
☆1,438 · Updated 5 months ago
Alternatives and similar repositories for SWELancer-Benchmark
Users interested in SWELancer-Benchmark are comparing it to the repositories listed below.
- OpenAI Frontier Evals ☆966 · Updated 2 weeks ago
- Agentless🐱: an agentless approach to automatically solve software development problems ☆1,986 · Updated last year
- An agent benchmark with tasks in a simulated software company. ☆611 · Updated last month
- Code and Data for Tau-Bench ☆1,021 · Updated 3 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,236 · Updated last week
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆637 · Updated 9 months ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆607 · Updated 4 months ago
- The #1 open-source SWE-bench Verified implementation ☆840 · Updated 6 months ago
- ☆617 · Updated 3 months ago
- Learn how to use CUA (our Computer Using Agent) via the API on multiple computer environments. ☆1,242 · Updated 8 months ago
- Renderer for the harmony response format to be used with gpt-oss ☆4,082 · Updated last week
- [ICLR 2025] Automated Design of Agentic Systems ☆1,477 · Updated 10 months ago
- E2B Desktop Sandbox for LLMs. E2B Sandbox with desktop graphical environment that you can connect to any LLM for secure computer use. ☆1,186 · Updated last week
- Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents ☆1,772 · Updated 4 months ago
- OO for LLMs ☆883 · Updated last week
- A benchmark for LLMs on complicated tasks in the terminal ☆1,235 · Updated last week
- Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhan… ☆1,502 · Updated last year
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆396 · Updated this week
- τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment ☆572 · Updated last week
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆980 · Updated this week
- Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of multi-modal AI agents. ☆802 · Updated 7 months ago
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆485 · Updated last week
- SWE-bench: Can Language Models Resolve Real-World GitHub Issues? ☆4,017 · Updated last week
- 🌎💪 BrowserGym, a Gym environment for web task automation ☆1,046 · Updated last week
- The 100 line AI agent that solves GitHub issues or helps you in your command line. Radically simple, no huge configs, no giant monorepo—b… ☆2,343 · Updated last week
- Sky-T1: Train your own O1 preview model within $450 ☆3,363 · Updated 5 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,283 · Updated last week
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆487 · Updated last week
- AI computer use powered by open source LLMs and E2B Desktop Sandbox ☆1,697 · Updated 6 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆745 · Updated 5 months ago