openai / mle-bench
MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering.
☆492 · Updated last week
Related projects
Alternatives and complementary repositories for mle-bench
- AIDE: the state-of-the-art machine learning engineer agent, generating machine learning solution code from natural language descriptions. ☆563 · Updated last week
- AWM: Agent Workflow Memory ☆203 · Updated last month
- Code for Quiet-STaR ☆641 · Updated 2 months ago
- Automated Design of Agentic Systems ☆1,020 · Updated this week
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆334 · Updated 2 months ago
- Code for Husky, an open-source language agent that solves complex, multi-step reasoning tasks. Husky v1 addresses numerical, tabular and … ☆328 · Updated 4 months ago
- Official repository for ORPO ☆419 · Updated 5 months ago
- [ICML 2024] Official repository for "Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models" ☆678 · Updated 3 months ago
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆801 · Updated 2 months ago
- Automatically evaluate your LLMs in Google Colab ☆557 · Updated 6 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆436 · Updated last week
- Large Reasoning Models ☆492 · Updated this week
- System 2 Reasoning Link Collection ☆686 · Updated 2 weeks ago
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆795 · Updated 2 months ago
- Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality s… ☆480 · Updated last week
- An Analytical Evaluation Board of Multi-turn LLM Agents ☆245 · Updated 5 months ago
- An Open Source Toolkit For LLM Distillation ☆352 · Updated last month
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆788 · Updated this week
- The official evaluation suite and dynamic data release for MixEval. ☆222 · Updated this week
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆739 · Updated last week
- [NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models ☆532 · Updated 2 weeks ago
- Autonomous Agents (LLMs) research papers. Updated Daily. ☆495 · Updated this week
- Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhan… ☆484 · Updated 5 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆644 · Updated last month