openai / mle-bench
MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering.
☆ 728 · Updated 2 weeks ago
Alternatives and similar repositories for mle-bench
Users interested in mle-bench are comparing it to the repositories listed below.
- AIDE: AI-Driven Exploration in the Space of Code. A state-of-the-art machine learning engineering agent that automates AI R&D. ☆ 912 · Updated last month
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆ 477 · Updated 3 weeks ago
- Recipes to scale inference-time compute of open models ☆ 1,087 · Updated last week
- ☆ 1,024 · Updated 5 months ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆ 530 · Updated 2 months ago
- Code for Quiet-STaR ☆ 732 · Updated 9 months ago
- Verifiers for LLM Reinforcement Learning ☆ 1,057 · Updated this week
- [ICML 2024] Official repository for "Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models" ☆ 753 · Updated 10 months ago
- ☆ 554 · Updated last month
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆ 705 · Updated 2 months ago
- ☆ 934 · Updated 4 months ago
- Code and Data for Tau-Bench ☆ 528 · Updated 4 months ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆ 499 · Updated 3 weeks ago
- Large Reasoning Models ☆ 804 · Updated 6 months ago
- [ICLR 2025] Automated Design of Agentic Systems ☆ 1,315 · Updated 4 months ago
- Training Large Language Models to Reason in a Continuous Latent Space ☆ 1,135 · Updated 4 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆ 1,574 · Updated this week
- Official repository for ORPO ☆ 453 · Updated last year
- SkyRL-v0: Train Real-World Long-Horizon Agents via Reinforcement Learning ☆ 343 · Updated last week
- Automatic evals for LLMs ☆ 399 · Updated this week
- ReCall: Learning to Reason with Tool Call for LLMs via Reinforcement Learning ☆ 888 · Updated 2 weeks ago
- [NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models ☆ 635 · Updated 2 months ago
- ☆ 517 · Updated 6 months ago
- [ICML 2025 Spotlight] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆ 525 · Updated 3 weeks ago
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆ 448 · Updated last week
- Procedural reasoning datasets ☆ 625 · Updated this week
- AWM: Agent Workflow Memory ☆ 271 · Updated 4 months ago
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆ 948 · Updated last month
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆ 309 · Updated 6 months ago
- RewardBench: the first evaluation tool for reward models ☆ 590 · Updated this week