openai / SWELancer-Benchmark
This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?"
⭐1,372 · Updated last month
Alternatives and similar repositories for SWELancer-Benchmark
Users interested in SWELancer-Benchmark are comparing it to the libraries listed below.
- Releases from OpenAI Preparedness (⭐736 · Updated this week)
- Agentless🐱: an agentless approach to automatically solve software development problems (⭐1,670 · Updated 4 months ago)
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" (⭐517 · Updated 2 months ago)
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] (⭐455 · Updated last week)
- A curated list of resources about AI agents for Computer Use, including research papers, projects, frameworks, and tools. (⭐1,229 · Updated last month)
- E2B Desktop Sandbox for LLMs: a sandbox with a desktop graphical environment that you can connect to any LLM for secure computer use. (⭐817 · Updated this week)
- Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of multi-modal AI agents. (⭐699 · Updated 2 weeks ago)
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering (⭐703 · Updated last week)
- The #1 open-source SWE-bench Verified implementation (⭐630 · Updated last month)
- [ICLR 2025] Automated Design of Agentic Systems (⭐1,286 · Updated 3 months ago)
- Training Large Language Model to Reason in a Continuous Latent Space (⭐1,109 · Updated 3 months ago)
- Learn how to use CUA (our Computer Using Agent) via the API on multiple computer environments. (⭐881 · Updated 3 weeks ago)
- Verifiers for LLM Reinforcement Learning (⭐953 · Updated this week)
- AI computer use powered by open source LLMs and E2B Desktop Sandbox (⭐1,147 · Updated 2 months ago)
- This is a collection of resources for computer-use GUI agents, including videos, blogs, papers, and projects. (⭐363 · Updated 3 weeks ago)
- SWE-bench [Multimodal]: Can Language Models Resolve Real-world GitHub Issues? (⭐2,911 · Updated last week)
- ⭐1,795 · Updated last month
- Atom of Thoughts for Markov LLM Test-Time Scaling (⭐563 · Updated this week)
- An agent benchmark with tasks in a simulated software company. (⭐350 · Updated this week)
- ⚖️ The First Coding Agent-as-a-Judge (⭐484 · Updated last week)
- A Python package that makes it easy for developers to create AI apps powered by various AI providers. (⭐1,605 · Updated last month)
- Code and Data for Tau-Bench (⭐485 · Updated 3 months ago)
- Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhan… (⭐1,160 · Updated 11 months ago)
- ⭐3,323 · Updated last month
- Sidecar is the AI brains for the Aide editor and works alongside it, locally on your machine (⭐550 · Updated last week)
- Procedural reasoning datasets (⭐580 · Updated this week)
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … (⭐357 · Updated this week)
- Code for "WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models" (⭐769 · Updated last year)
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents (⭐492 · Updated this week)
- Open-source resources on agents for computer use. (⭐332 · Updated 3 months ago)