laude-institute / terminal-bench
A benchmark for LLMs on complicated tasks in the terminal
☆961 · Updated this week
Alternatives and similar repositories for terminal-bench
Users interested in terminal-bench are comparing it to the repositories listed below.
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆432 · Updated last week
- Sandboxed code execution for AI agents, locally or in the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆339 · Updated last week
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆607 · Updated 7 months ago
- OpenAI Frontier Evals ☆924 · Updated last week
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆553 · Updated 2 months ago
- τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment ☆366 · Updated this week
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆688 · Updated 3 months ago
- Post-training with Tinker ☆1,096 · Updated last week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,101 · Updated this week
- A MemAgent framework that extrapolates to 3.5M-token contexts, along with a training framework for RL training of any agent workflow. ☆748 · Updated 2 months ago
- Code and Data for Tau-Bench ☆901 · Updated 2 months ago
- An agent benchmark with tasks in a simulated software company. ☆570 · Updated 2 weeks ago
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI ☆440 · Updated last week
- 🐉 Loong: Synthesize Long CoTs at Scale through Verifiers. ☆451 · Updated 3 weeks ago
- Open-sourced predictions, execution logs, trajectories, and results from model inference and evaluation runs on the SWE-bench task. ☆218 · Updated this week
- An Open-Source Large-Scale Reinforcement Learning Project for Search Agents ☆471 · Updated 2 weeks ago
- Automatic evals for LLMs ☆550 · Updated 4 months ago
- Prompt-to-Leaderboard ☆260 · Updated 5 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,042 · Updated last week
- Code and implementations for the paper "AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning" ☆453 · Updated last month
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆554 · Updated 5 months ago
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆775 · Updated last week
- Scaling RL on advanced reasoning models ☆620 · Updated last week
- Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving ☆268 · Updated this week
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and reproducibility ☆434 · Updated this week
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios, unlike static benchmarks. ☆321 · Updated 2 weeks ago