openai / mle-bench
MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering.
☆1,263 · Updated 2 weeks ago
Alternatives and similar repositories for mle-bench
Users interested in mle-bench are comparing it to the repositories listed below.
- AIDE: AI-Driven Exploration in the Space of Code. The machine learning engineering agent that automates AI R&D. ☆1,106 · Updated 2 months ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆583 · Updated 4 months ago
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆650 · Updated 9 months ago
- ☆1,377 · Updated 3 months ago
- Code and Data for Tau-Bench ☆1,037 · Updated 4 months ago
- ☆1,035 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,123 · Updated 7 months ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆609 · Updated 5 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,415 · Updated this week
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,059 · Updated 5 months ago
- OpenAI Frontier Evals ☆971 · Updated last month
- [ICLR 2025] Automated Design of Agentic Systems ☆1,480 · Updated 11 months ago
- ☆969 · Updated 11 months ago
- Training Large Language Models to Reason in a Continuous Latent Space ☆1,430 · Updated 4 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,296 · Updated 3 weeks ago
- An agent benchmark with tasks in a simulated software company. ☆617 · Updated last month
- Large Reasoning Models ☆806 · Updated last year
- An Open Large Reasoning Model for Real-World Solutions ☆1,536 · Updated 7 months ago
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆596 · Updated 4 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆753 · Updated 5 months ago
- Code for Quiet-STaR ☆742 · Updated last year
- ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning & ReCall: Learning to Reason with Tool Call for LLMs via Rei… ☆1,277 · Updated 7 months ago
- RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments. ☆2,467 · Updated this week
- 👩‍⚖️ Agent-as-a-Judge: The Magic for Open-Endedness ☆696 · Updated 7 months ago
- A project to improve the skills of large language models ☆734 · Updated this week
- Automatic evals for LLMs ☆570 · Updated 2 weeks ago
- τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment ☆583 · Updated 2 weeks ago
- Code and implementations for the ACL 2025 paper "AgentGym: Evolving Large Language Model-based Agents across Diverse Environments" by Zhi… ☆675 · Updated 3 months ago
- A library for advanced large language model reasoning ☆2,319 · Updated 6 months ago
- ☆1,344 · Updated last year