openai / frontier-evals
OpenAI Frontier Evals
☆994 · Updated 2 months ago
Alternatives and similar repositories for frontier-evals
Users interested in frontier-evals are comparing it to the repositories listed below.
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,301 · Updated 3 weeks ago
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆675 · Updated 10 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆786 · Updated 6 months ago
- A benchmark for LLMs on complicated tasks in the terminal ☆1,494 · Updated 2 weeks ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆627 · Updated 6 months ago
- ☆1,388 · Updated 4 months ago
- Training Large Language Models to Reason in a Continuous Latent Space ☆1,496 · Updated 5 months ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆538 · Updated last week
- An agent benchmark with tasks in a simulated software company. ☆635 · Updated 2 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,547 · Updated this week
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆509 · Updated 3 weeks ago
- This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software E… ☆1,439 · Updated 6 months ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆583 · Updated 6 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆863 · Updated last month
- τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment ☆717 · Updated last week
- ☆874 · Updated 5 months ago
- Code and Data for Tau-Bench ☆1,087 · Updated 5 months ago
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,062 · Updated 6 months ago
- Harbor is a framework for running agent evaluations and creating and using RL environments. ☆542 · Updated last week
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,332 · Updated 3 weeks ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆427 · Updated 2 weeks ago
- Post-training with Tinker ☆2,805 · Updated last week
- A project to improve skills of large language models ☆813 · Updated this week
- [NeurIPS 2025] Atom of Thoughts for Markov LLM Test-Time Scaling ☆641 · Updated this week
- 👩‍⚖️ Agent-as-a-Judge: The Magic for Open-Endedness ☆720 · Updated 8 months ago
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆466 · Updated last year
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆627 · Updated last week
- Repository for Zochi's Research ☆300 · Updated 2 months ago
- Testing baseline LLM performance across various models ☆336 · Updated last week
- ☆331 · Updated 6 months ago