laude-institute / terminal-bench
A benchmark for LLMs on complicated tasks in the terminal
☆854 · Updated this week
Alternatives and similar repositories for terminal-bench
Users interested in terminal-bench are comparing it to the libraries listed below.
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents · ☆414 · Updated last week
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. · ☆327 · Updated this week
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] · ☆547 · Updated 2 months ago
- τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment · ☆327 · Updated last month
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" · ☆603 · Updated 6 months ago
- OpenAI Frontier Evals · ☆903 · Updated 2 weeks ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" · ☆670 · Updated 2 months ago
- Code and Data for Tau-Bench · ☆860 · Updated last month
- 🐉 Loong: Synthesize Long CoTs at Scale through Verifiers. · ☆448 · Updated last week
- An Open-Source Large-Scale Reinforcement Learning Project for Search Agents · ☆442 · Updated last week
- An agent benchmark with tasks in a simulated software company. · ☆556 · Updated 2 weeks ago
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI · ☆431 · Updated last month
- Post-training with Tinker · ☆550 · Updated this week
- Open-sourced predictions, execution logs, trajectories, and results from model inference and evaluation runs on the SWE-bench task · ☆215 · Updated this week
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction · ☆549 · Updated 5 months ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… · ☆282 · Updated 2 weeks ago
- SkyRL: A Modular Full-stack RL Library for LLMs · ☆950 · Updated this week
- A MemAgent framework that can be extrapolated to 3.5M, along with a training framework for RL training of any agent workflow. · ☆703 · Updated 2 months ago
- Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving · ☆258 · Updated this week
- Automatic evals for LLMs · ☆539 · Updated 3 months ago
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… · ☆414 · Updated this week
- Open-source interpretability platform 🧠 · ☆432 · Updated this week
- Benchmarking Chat Assistants on Long-Term Interactive Memory (ICLR 2025) · ☆228 · Updated 2 weeks ago
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines · ☆751 · Updated last week
- DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents · ☆406 · Updated 2 months ago