openai / SWELancer-Benchmark
This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?"
☆1,436 · Updated 2 months ago
Alternatives and similar repositories for SWELancer-Benchmark
Users interested in SWELancer-Benchmark are comparing it to the libraries listed below.
- Releases from OpenAI Preparedness ☆860 · Updated 3 weeks ago
- Agentless🐱: an agentless approach to automatically solve software development problems ☆1,904 · Updated 8 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆957 · Updated this week
- Verifiers for LLM Reinforcement Learning ☆3,057 · Updated this week
- Code and Data for Tau-Bench ☆834 · Updated 3 weeks ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆596 · Updated 6 months ago
- ☆2,335 · Updated 2 weeks ago
- A benchmark for LLMs on complicated tasks in the terminal ☆691 · Updated this week
- An agent benchmark with tasks in a simulated software company. ☆546 · Updated 3 weeks ago
- Renderer for the harmony response format to be used with gpt-oss ☆3,774 · Updated last month
- ☆593 · Updated 2 weeks ago
- Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents ☆1,656 · Updated last month
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆541 · Updated last month
- Sky-T1: Train your own O1 preview model within $450 ☆3,327 · Updated 2 months ago
- Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhan… ☆1,379 · Updated last year
- The #1 open-source SWE-bench Verified implementation ☆822 · Updated 3 months ago
- OO for LLMs ☆849 · Updated this week
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆653 · Updated 2 months ago
- Humanity's Last Exam ☆1,098 · Updated last month
- SWE-bench: Can Language Models Resolve Real-world GitHub Issues? ☆3,486 · Updated last week
- ☆1,233 · Updated this week
- Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking multi-modal AI agents. ☆766 · Updated 4 months ago
- Scaling Data for SWE-agents ☆399 · Updated this week
- E2B Desktop Sandbox for LLMs: an E2B Sandbox with a desktop graphical environment that you can connect to any LLM for secure computer use. ☆1,092 · Updated 2 weeks ago
- Synthetic data curation for post-training and structured data extraction ☆1,500 · Updated last month
- Atom of Thoughts for Markov LLM Test-Time Scaling ☆586 · Updated 3 months ago
- Learn how to use CUA (our Computer Using Agent) via the API on multiple computer environments. ☆1,117 · Updated 4 months ago
- [ICLR 2025] Automated Design of Agentic Systems ☆1,413 · Updated 7 months ago
- The 100-line AI agent that solves GitHub issues or helps you in your command line. Radically simple, no huge configs, no giant monorepo—b… ☆1,630 · Updated last week
- Procedural reasoning datasets ☆1,102 · Updated this week