openai / SWELancer-Benchmark
This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?"
☆1,435 · Updated 2 months ago
Alternatives and similar repositories for SWELancer-Benchmark
Users interested in SWELancer-Benchmark are comparing it to the repositories listed below
- Releases from OpenAI Preparedness ☆793 · Updated this week
- Agentless🐱: an agentless approach to automatically solve software development problems ☆1,793 · Updated 6 months ago
- The #1 open-source SWE-bench Verified implementation ☆761 · Updated last month
- An agent benchmark with tasks in a simulated software company. ☆488 · Updated last week
- Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents ☆1,520 · Updated last month
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆567 · Updated 4 months ago
- Humanity's Last Exam ☆925 · Updated last month
- Verifiers for LLM Reinforcement Learning ☆1,543 · Updated this week
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆800 · Updated 3 weeks ago
- E2B Desktop Sandbox for LLMs. E2B Sandbox with desktop graphical environment that you can connect to any LLM for secure computer use. ☆1,008 · Updated 2 weeks ago
- Kimi K2 is the large language model series developed by Moonshot AI team ☆5,693 · Updated this week
- OO for LLMs ☆815 · Updated this week
- SWE-bench [Multimodal]: Can Language Models Resolve Real-world Github Issues? ☆3,173 · Updated 2 weeks ago
- Code and Data for Tau-Bench ☆666 · Updated this week
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆502 · Updated 2 months ago
- Sky-T1: Train your own O1 preview model within $450 ☆3,305 · Updated last week
- Learn how to use CUA (our Computer Using Agent) via the API on multiple computer environments. ☆1,007 · Updated 2 months ago
- Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhan… ☆1,296 · Updated last year
- Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of multi-modal AI agents. ☆734 · Updated 2 months ago
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆823 · Updated this week
- Synthetic data curation for post-training and structured data extraction ☆1,446 · Updated last week
- Self-Adapting Language Models ☆697 · Updated last month
- MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model. ☆2,665 · Updated last week
- Agent Reinforcement Trainer: train multi-step agents for real-world tasks using GRPO. Give your agents on-the-job training. Reinforcement… ☆3,016 · Updated this week
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,190 · Updated 5 months ago
- Big & Small LLMs working together☆1,065Updated this week
- Atom of Thoughts for Markov LLM Test-Time Scaling☆579Updated last month
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code"☆590Updated last week