openai / SWELancer-Benchmark
This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?"
☆1,439 · Updated 4 months ago
Alternatives and similar repositories for SWELancer-Benchmark
Users interested in SWELancer-Benchmark are comparing it to the repositories listed below.
- OpenAI Frontier Evals ☆942 · Updated 3 weeks ago
- Agentless🐱: an agentless approach to automatically solve software development problems ☆1,956 · Updated 11 months ago
- An agent benchmark with tasks in a simulated software company. ☆582 · Updated this week
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,178 · Updated last week
- The #1 open-source SWE-bench Verified implementation ☆833 · Updated 5 months ago
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆618 · Updated 8 months ago
- Code and Data for Tau-Bench ☆947 · Updated 2 months ago
- A benchmark for LLMs on complicated tasks in the terminal ☆1,109 · Updated this week
- [ICLR 2025] Automated Design of Agentic Systems ☆1,460 · Updated 9 months ago
- OO for LLMs ☆875 · Updated this week
- Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhan… ☆1,449 · Updated last year
- E2B Desktop Sandbox for LLMs. E2B Sandbox with a desktop graphical environment that you can connect to any LLM for secure computer use. ☆1,146 · Updated 2 weeks ago
- Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents ☆1,741 · Updated 3 months ago
- SWE-bench: Can Language Models Resolve Real-world GitHub Issues? ☆3,825 · Updated last week
- Learn how to use CUA (our Computer Using Agent) via the API on multiple computer environments. ☆1,214 · Updated 6 months ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆576 · Updated 3 months ago
- Synthetic data curation for post-training and structured data extraction ☆1,553 · Updated 3 months ago
- Post-training with Tinker ☆2,148 · Updated this week
- The 100 line AI agent that solves GitHub issues or helps you in your command line. Radically simple, no huge configs, no giant monorepo—b… ☆2,095 · Updated this week
- Renderer for the harmony response format to be used with gpt-oss ☆4,020 · Updated 2 weeks ago
- [NeurIPS 2025] Atom of Thoughts for Markov LLM Test-Time Scaling ☆596 · Updated 5 months ago
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆928 · Updated last week
- Environments for LLM Reinforcement Learning ☆3,495 · Updated this week
- Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of multi-modal AI agents. ☆788 · Updated 6 months ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆463 · Updated this week
- Testing baseline LLM performance across various models ☆322 · Updated last week
- Optimize prompts, code, and more with AI-powered Reflective Text Evolution ☆1,593 · Updated last week
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆715 · Updated 4 months ago