openai / mle-bench
MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering.
☆692 · Updated 3 weeks ago
Alternatives and similar repositories for mle-bench:
Users interested in mle-bench are comparing it to the repositories listed below.
- AIDE: AI-Driven Exploration in the Space of Code. A state-of-the-art machine learning engineering agent that automates AI R&D. ☆884 · Updated 2 weeks ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆512 · Updated last month
- ☆1,017 · Updated 4 months ago
- Code for Quiet-STaR ☆731 · Updated 8 months ago
- Recipes to scale inference-time compute of open models ☆1,066 · Updated 2 months ago
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆322 · Updated this week
- Code and Data for Tau-Bench ☆472 · Updated 3 months ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆487 · Updated 3 weeks ago
- ReCall: Learning to Reason with Tool Call for LLMs via Reinforcement Learning ☆808 · Updated last week
- AWM: Agent Workflow Memory ☆269 · Updated 3 months ago
- A bibliography and survey of the papers surrounding o1 ☆1,191 · Updated 5 months ago
- Large Reasoning Models ☆804 · Updated 5 months ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆450 · Updated last month
- [ICLR 2025] Automated Design of Agentic Systems ☆1,283 · Updated 3 months ago
- Verifiers for LLM Reinforcement Learning ☆881 · Updated last month
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆425 · Updated 3 weeks ago
- Code and implementations for the paper "AgentGym: Evolving Large Language Model-based Agents across Diverse Environments" by Zhiheng Xi e… ☆458 · Updated last month
- System 2 Reasoning Link Collection ☆828 · Updated last month
- ☆924 · Updated 3 months ago
- Agentless 🐱: An agentless approach to automatically solving software development problems ☆1,656 · Updated 4 months ago
- LIMO: Less is More for Reasoning ☆927 · Updated last month
- Code for the paper 🌳 Tree Search for Language Model Agents ☆197 · Updated 9 months ago
- [ICML 2024] Official repository for "Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models" ☆751 · Updated 9 months ago
- RewardBench: the first evaluation tool for reward models ☆562 · Updated this week
- ☆524 · Updated 3 weeks ago
- Automatic evals for LLMs ☆376 · Updated this week
- Search-o1: Agentic Search-Enhanced Large Reasoning Models ☆839 · Updated last month
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,500 · Updated this week
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆264 · Updated this week
- Autonomous Agents (LLMs) research papers. Updated daily. ☆788 · Updated last week