openai / mle-bench
MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering.
☆1,144 · Updated last week
Alternatives and similar repositories for mle-bench
Users interested in mle-bench are comparing it to the repositories listed below.
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆616 · Updated 7 months ago
- AIDE: AI-Driven Exploration in the Space of Code. A machine learning engineering agent that automates AI R&D. ☆1,068 · Updated last week
- Code and Data for Tau-Bench ☆935 · Updated 2 months ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆568 · Updated 3 months ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆573 · Updated 3 months ago
- ☆1,335 · Updated 2 months ago
- Recipes to scale inference-time compute of open models ☆1,117 · Updated 5 months ago
- ☆1,035 · Updated 10 months ago
- OpenAI Frontier Evals ☆937 · Updated 2 weeks ago
- [ICLR 2025] Automated Design of Agentic Systems ☆1,459 · Updated 9 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,170 · Updated this week
- An agent benchmark with tasks in a simulated software company. ☆581 · Updated last month
- Large Reasoning Models ☆806 · Updated 11 months ago
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆578 · Updated 2 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,527 · Updated 5 months ago
- Code for Quiet-STaR ☆741 · Updated last year
- ☆963 · Updated 9 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,214 · Updated last month
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,045 · Updated 3 months ago
- Code and implementations for the ACL 2025 paper "AgentGym: Evolving Large Language Model-based Agents across Diverse Environments" by Zhi… ☆640 · Updated 2 months ago
- 🔍 Search-o1: Agentic Search-Enhanced Large Reasoning Models [EMNLP 2025] ☆1,084 · Updated 2 months ago
- A project to improve skills of large language models ☆608 · Updated this week
- ☆1,349 · Updated 11 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆703 · Updated 3 months ago
- ReCall: Learning to Reason with Tool Call for LLMs via Reinforcement Learning ☆1,241 · Updated 5 months ago
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆463 · Updated last week
- RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments. ☆2,390 · Updated this week
- A library for advanced large language model reasoning ☆2,300 · Updated 5 months ago
- Automatic evals for LLMs ☆556 · Updated 4 months ago
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,327 · Updated 3 months ago