MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering.
☆1,329 · Feb 26, 2026 · Updated this week
Alternatives and similar repositories for mle-bench
Users interested in mle-bench are comparing it to the repositories listed below.
- AIDE: AI-Driven Exploration in the Space of Code. The machine learning engineering agent that automates AI R&D. ☆1,140 · Feb 12, 2026 · Updated 2 weeks ago
- AIDE: the Machine Learning CodeGen Agent ☆25 · Oct 7, 2024 · Updated last year
- ☆330 · Jun 19, 2024 · Updated last year
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆644 · Jul 29, 2025 · Updated 7 months ago
- The official implementation of "ML-Master: Towards AI-for-AI via Integration of Exploration and Reasoning" ☆368 · Jan 16, 2026 · Updated last month
- OpenAI Frontier Evals ☆1,099 · Feb 18, 2026 · Updated last week
- ☆4,368 · Jul 31, 2025 · Updated 7 months ago
- ☆90 · Oct 30, 2025 · Updated 4 months ago
- SWE-bench: Can Language Models Resolve Real-World GitHub Issues? ☆4,385 · Feb 19, 2026 · Updated last week
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆585 · Aug 10, 2025 · Updated 6 months ago
- Dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software E…" ☆1,439 · Jul 18, 2025 · Updated 7 months ago
- AllenAI's post-training codebase ☆3,605 · Updated this week
- Democratizing Reinforcement Learning for LLMs ☆5,167 · Updated this week
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆678 · Mar 16, 2025 · Updated 11 months ago
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,519 · Updated this week
- A benchmark for LLMs on complicated tasks in the terminal ☆1,614 · Jan 22, 2026 · Updated last month
- Code and data for Tau-Bench ☆1,103 · Aug 28, 2025 · Updated 6 months ago
- Educational framework exploring ergonomic, lightweight multi-agent orchestration. Managed by OpenAI Solution team. ☆21,026 · Mar 11, 2025 · Updated 11 months ago
- [ICLR 2025] DSBench: How Far Are Data Science Agents from Becoming Data Science Experts? ☆106 · Aug 17, 2025 · Updated 6 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,532 · Feb 13, 2026 · Updated 2 weeks ago
- Code and example data for the paper "Rule-Based Rewards for Language Model Safety" ☆208 · Jul 19, 2024 · Updated last year
- ☆1,344 · Nov 21, 2024 · Updated last year
- [ICLR 2025] Automated Design of Agentic Systems ☆1,522 · Jan 28, 2025 · Updated last year
- A comprehensive benchmark to evaluate LLMs as agents (ICLR'24) ☆3,187 · Feb 8, 2026 · Updated 3 weeks ago
- The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery 🧑‍🔬 ☆12,216 · Dec 19, 2025 · Updated 2 months ago
- SkyRL: A Modular Full-Stack RL Library for LLMs ☆1,628 · Updated this week
- OpenR: An Open-Source Framework for Advanced Reasoning with Large Language Models ☆1,833 · Jan 17, 2025 · Updated last year
- Train transformer language models with reinforcement learning. ☆17,460 · Updated this week
- A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques ☆6,888 · Dec 17, 2025 · Updated 2 months ago
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,037 · Feb 21, 2026 · Updated last week
- A library for advanced large language model reasoning ☆2,333 · Jun 10, 2025 · Updated 8 months ago
- ☆284 · Dec 4, 2024 · Updated last year
- Agentless 🐱: an agentless approach to automatically solve software development problems ☆2,010 · Dec 22, 2024 · Updated last year
- Can Language Models Solve Olympiad Programming? ☆123 · Jan 14, 2025 · Updated last year
- Official implementation of "DS-Agent: Automated Data Science by Empowering Large Language Models with Case-Based Reasoning" (ICML'24) ☆226 · Dec 3, 2024 · Updated last year
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆692 · Jan 20, 2025 · Updated last year
- O1 Replication Journey ☆1,999 · Jan 14, 2025 · Updated last year
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,094 · Jun 1, 2023 · Updated 2 years ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,953 · Aug 9, 2025 · Updated 6 months ago