openai / SWELancer-Benchmark
This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?"
☆1,438 · Updated 4 months ago
Alternatives and similar repositories for SWELancer-Benchmark
Users interested in SWELancer-Benchmark are comparing it to the libraries listed below.
- OpenAI Frontier Evals ☆957 · Updated last week
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,209 · Updated 2 weeks ago
- An agent benchmark with tasks in a simulated software company. ☆601 · Updated 3 weeks ago
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆627 · Updated 8 months ago
- Agentless🐱: an agentless approach to automatically solve software development problems ☆1,979 · Updated 11 months ago
- SWE-bench: Can Language Models Resolve Real-World GitHub Issues? ☆3,941 · Updated last week
- Renderer for the harmony response format to be used with gpt-oss ☆4,077 · Updated last month
- A benchmark for LLMs on complicated tasks in the terminal ☆1,196 · Updated last week
- The #1 open-source SWE-bench Verified implementation ☆839 · Updated 6 months ago
- Sky-T1: Train your own O1 preview model within $450 ☆3,358 · Updated 5 months ago
- Code and Data for Tau-Bench ☆1,001 · Updated 3 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆736 · Updated 4 months ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆595 · Updated 4 months ago
- OO for LLMs ☆880 · Updated this week
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆485 · Updated this week
- ☆615 · Updated 3 months ago
- ☆2,477 · Updated last month
- Synthetic data curation for post-training and structured data extraction ☆1,572 · Updated 4 months ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆388 · Updated this week
- Open-source resources on agents for computer use. ☆385 · Updated 2 months ago
- Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents ☆1,760 · Updated 4 months ago
- Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of multi-modal AI agents. ☆795 · Updated 7 months ago
- The 100 line AI agent that solves GitHub issues or helps you in your command line. Radically simple, no huge configs, no giant monorepo—b… ☆2,292 · Updated this week
- Optimize prompts, code, and more with AI-powered Reflective Text Evolution ☆1,774 · Updated 3 weeks ago
- Post-training with Tinker ☆2,357 · Updated this week
- Humanity's Last Exam ☆1,256 · Updated 2 months ago
- A Self-adaptation Framework🐙 that adapts LLMs for unseen tasks in real-time! ☆1,174 · Updated 10 months ago
- τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment ☆525 · Updated this week
- Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhan… ☆1,475 · Updated last year
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆576 · Updated 4 months ago