openai / SWELancer-Benchmark
This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?"
☆1,436 · Updated 2 months ago
Alternatives and similar repositories for SWELancer-Benchmark
Users interested in SWELancer-Benchmark are comparing it to the libraries listed below.
- OpenAI Frontier Evals ☆903 · Updated 2 weeks ago
- Agentless🐱: an agentless approach to automatically solve software development problems ☆1,924 · Updated 9 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆985 · Updated last week
- An agent benchmark with tasks in a simulated software company. ☆556 · Updated 2 weeks ago
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆603 · Updated 6 months ago
- ☆598 · Updated last month
- SWE-bench: Can Language Models Resolve Real-World GitHub Issues? ☆3,613 · Updated 2 weeks ago
- Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents ☆1,684 · Updated last month
- The #1 open-source SWE-bench Verified implementation ☆829 · Updated 4 months ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆547 · Updated 2 months ago
- Renderer for the harmony response format to be used with gpt-oss ☆3,854 · Updated last month
- Code and Data for Tau-Bench ☆860 · Updated last month
- ☆2,361 · Updated this week
- A benchmark for LLMs on complicated tasks in the terminal ☆854 · Updated this week
- OO for LLMs ☆856 · Updated last week
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆670 · Updated 2 months ago
- Synthetic data curation for post-training and structured data extraction ☆1,512 · Updated 2 months ago
- Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking multi-modal AI agents. ☆769 · Updated 5 months ago
- Humanity's Last Exam ☆1,117 · Updated 2 months ago
- A Self-adaptation Framework🐙 that adapts LLMs for unseen tasks in real time! ☆1,151 · Updated 8 months ago
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆882 · Updated last week
- Training Large Language Models to Reason in a Continuous Latent Space ☆1,278 · Updated last month
- Sandboxed code execution for AI agents, locally or in the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆327 · Updated this week
- Post-training with Tinker ☆550 · Updated last week
- [ICLR 2025] Automated Design of Agentic Systems ☆1,428 · Updated 8 months ago
- E2B Desktop Sandbox for LLMs. E2B Sandbox with desktop graphical environment that you can connect to any LLM for secure computer use. ☆1,103 · Updated 2 weeks ago
- Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhan… ☆1,391 · Updated last year
- Optimize prompts, code, and more with AI-powered Reflective Text Evolution ☆1,066 · Updated last week
- Environments for LLM Reinforcement Learning ☆3,254 · Updated this week
- AIDE: AI-Driven Exploration in the Space of Code. The machine learning engineering agent that automates AI R&D. ☆1,042 · Updated 2 weeks ago