This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?"
☆1,439 · Updated Jul 18, 2025
Alternatives and similar repositories for SWELancer-Benchmark
Users interested in SWELancer-Benchmark are comparing it to the repositories listed below
- SWE-bench: Can Language Models Resolve Real-World GitHub Issues? ☆4,478 · Updated this week
- Agentless 🐱: an agentless approach to automatically solve software development problems ☆2,019 · Updated Dec 22, 2024
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆650 · Updated Jul 29, 2025
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆678 · Updated Mar 16, 2025
- OpenAI Frontier Evals ☆1,136 · Updated Mar 4, 2026
- Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving ☆326 · Updated Dec 18, 2025
- MLE-bench: a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,381 · Updated this week
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆597 · Updated this week
- Open-sourced predictions, execution logs, trajectories, and results from model inference and evaluation runs on the SWE-bench task ☆255 · Updated Feb 27, 2026
- SWE-agent takes a GitHub issue and tries to automatically fix it, using your LM of choice. It can also be employed for offensive cybersec… ☆18,730 · Updated Mar 9, 2026
- Democratizing Reinforcement Learning for LLMs ☆5,219 · Updated Mar 13, 2026
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆587 · Updated Aug 10, 2025
- Educational framework exploring ergonomic, lightweight multi-agent orchestration. Managed by OpenAI Solution team. ☆21,189 · Updated Mar 11, 2025
- Commit0: Library Generation from Scratch ☆187 · Updated Feb 24, 2026
- ☆4,398 · Updated Jul 31, 2025
- 👩‍⚖️ Agent-as-a-Judge: The Magic for Open-Endedness ☆729 · Updated May 14, 2025
- ☆131 · Updated Jun 6, 2025
- Sky-T1: Train your own O1 preview model within $450 ☆3,369 · Updated Jul 12, 2025
- [COLM 2025] Official repository for "R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents" ☆254 · Updated Jul 13, 2025
- ☆631 · Updated Sep 1, 2025
- Fully open reproduction of DeepSeek-R1 ☆25,941 · Updated Nov 24, 2025
- 🙌 OpenHands: AI-Driven Development ☆69,254 · Updated this week
- s1: Simple test-time scaling ☆6,642 · Updated Jun 25, 2025
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆569 · Updated May 6, 2025
- A lightweight, powerful framework for multi-agent workflows ☆19,975 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,919 · Updated this week
- Muon is Scalable for LLM Training ☆1,444 · Updated Aug 3, 2025
- Minimal reproduction of DeepSeek R1-Zero ☆12,932 · Updated Feb 27, 2026
- Lightweight coding agent that runs in your terminal ☆65,974 · Updated this week
- A simple demonstration of more advanced, agentic patterns built on top of the Realtime API ☆6,797 · Updated Jan 7, 2026
- SWE-PolyBench: A multi-language benchmark for repository-level evaluation of coding agents ☆81 · Updated Feb 6, 2026
- Reproducing R1 for Code with Reliable Rewards ☆295 · Updated May 5, 2025
- [NeurIPS 2025 D&B] 🚀 SWE-bench Goes Live! ☆170 · Updated Mar 9, 2026
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆141 · Updated Apr 20, 2025
- A benchmark for LLMs on complicated tasks in the terminal ☆1,732 · Updated Jan 22, 2026
- DSPy: The framework for programming—not prompting—language models ☆32,853 · Updated this week
- 🤗 smolagents: a barebones library for agents that think in code ☆26,124 · Updated Mar 13, 2026
- A project-structure-aware autonomous software engineer aiming for autonomous program improvement. Resolved 37.3% of tasks (pass@1) in SWE-be… ☆3,062 · Updated Apr 24, 2025
- [NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments ☆2,667 · Updated Mar 13, 2026