openai / mle-bench
MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering.
☆1,217 · Updated last week
Alternatives and similar repositories for mle-bench
Users interested in mle-bench are comparing it to the repositories listed below.
- AIDE: AI-Driven Exploration in the Space of Code. The machine learning engineering agent that automates AI R&D. ☆1,091 · Updated last month
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆631 · Updated 9 months ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆581 · Updated 4 months ago
- Recipes to scale inference-time compute of open models ☆1,120 · Updated 6 months ago
- Code and Data for Tau-Bench ☆1,001 · Updated 3 months ago
- OpenAI Frontier Evals ☆962 · Updated last week
- ☆1,357 · Updated 3 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,361 · Updated this week
- ☆1,036 · Updated last year
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆595 · Updated 4 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆736 · Updated 5 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,276 · Updated this week
- [ICLR 2025] Automated Design of Agentic Systems ☆1,473 · Updated 10 months ago
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,056 · Updated 4 months ago
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆590 · Updated 3 months ago
- ☆969 · Updated 10 months ago
- Large Reasoning Models ☆806 · Updated last year
- Code for Quiet-STaR ☆743 · Updated last year
- An agent benchmark with tasks in a simulated software company. ☆601 · Updated last month
- 🔍 Search-o1: Agentic Search-Enhanced Large Reasoning Models [EMNLP 2025] ☆1,119 · Updated last month
- An Open Large Reasoning Model for Real-World Solutions ☆1,538 · Updated 6 months ago
- A project to improve the skills of large language models ☆665 · Updated this week
- ReCall: Learning to Reason with Tool Call for LLMs via Reinforcement Learning ☆1,262 · Updated 7 months ago
- Training Large Language Models to Reason in a Continuous Latent Space ☆1,393 · Updated 4 months ago
- Automatic evals for LLMs ☆567 · Updated 5 months ago
- τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment ☆525 · Updated last week
- End-to-end Generative Optimization for AI Agents ☆682 · Updated last week
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆482 · Updated this week
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆441 · Updated last year
- Code and implementations for the ACL 2025 paper "AgentGym: Evolving Large Language Model-based Agents across Diverse Environments" by Zhi… ☆665 · Updated 3 months ago