openai / mle-bench
MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering
☆823 · Updated last month
Alternatives and similar repositories for mle-bench
Users interested in mle-bench are comparing it to the libraries listed below.
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆573 · Updated 4 months ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆538 · Updated last week
- ☆1,028 · Updated 7 months ago
- AIDE: AI-Driven Exploration in the Space of Code. The machine learning engineering agent that automates AI R&D. ☆972 · Updated last week
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆513 · Updated last week
- Code and Data for Tau-Bench ☆713 · Updated 3 weeks ago
- ☆608 · Updated 3 weeks ago
- Code for Quiet-STaR ☆735 · Updated 11 months ago
- Recipes to scale inference-time compute of open models ☆1,110 · Updated 2 months ago
- Automatic evals for LLMs ☆496 · Updated last month
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆513 · Updated this week
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆372 · Updated this week
- [ICLR 2025] Automated Design of Agentic Systems ☆1,392 · Updated 6 months ago
- Procedural reasoning datasets ☆1,012 · Updated this week
- Large Reasoning Models ☆804 · Updated 8 months ago
- An agent benchmark with tasks in a simulated software company. ☆509 · Updated this week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆679 · Updated this week
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆608 · Updated 2 weeks ago
- A project to improve skills of large language models ☆501 · Updated this week
- AWM: Agent Workflow Memory ☆297 · Updated 6 months ago
- ☆954 · Updated 6 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆321 · Updated 8 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,766 · Updated last week
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,224 · Updated 6 months ago
- Releases from OpenAI Preparedness ☆815 · Updated this week
- System 2 Reasoning Link Collection ☆848 · Updated 4 months ago
- [NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models ☆649 · Updated last month
- An Open Large Reasoning Model for Real-World Solutions ☆1,510 · Updated 2 months ago
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆288 · Updated last week
- [COLM 2025] LIMO: Less is More for Reasoning ☆993 · Updated this week