laude-institute / terminal-bench
A benchmark for LLMs on complicated tasks in the terminal
☆1,350 · Updated 3 weeks ago
Alternatives and similar repositories for terminal-bench
Users interested in terminal-bench are comparing it to the repositories listed below.
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆514 · Updated this week
- Sandboxed code execution for AI agents, locally or in the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆404 · Updated 2 weeks ago
- τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment ☆631 · Updated last month
- Harbor is a framework for running agent evaluations and for creating and using RL environments. ☆381 · Updated this week
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆654 · Updated 10 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆762 · Updated 6 months ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆613 · Updated 5 months ago
- OpenAI Frontier Evals ☆983 · Updated last month
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,456 · Updated this week
- Code and Data for Tau-Bench ☆1,058 · Updated 4 months ago
- Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving ☆308 · Updated last month
- Seed-Coder is a family of lightweight open-source code LLMs comprising base, instruct, and reasoning models, developed by ByteDance Seed. ☆725 · Updated 7 months ago
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆566 · Updated 8 months ago
- ☆1,376 · Updated 4 months ago
- ☆318 · Updated 4 months ago
- Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆237 · Updated this week
- Post-training with Tinker ☆2,719 · Updated this week
- ☆618 · Updated 4 months ago
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆888 · Updated last week
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆410 · Updated 2 months ago
- ☆867 · Updated 4 months ago
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI ☆469 · Updated 2 weeks ago
- An Open-Source Large-Scale Reinforcement Learning Project for Search Agents ☆538 · Updated last month
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,276 · Updated this week
- A MemAgent framework that extrapolates to 3.5M-token contexts, along with a framework for RL training of any agent workflow. ☆859 · Updated 5 months ago
- Async RL Training at Scale ☆1,005 · Updated this week
- An agent benchmark with tasks in a simulated software company. ☆622 · Updated 2 months ago
- The #1 open-source SWE-bench Verified implementation ☆848 · Updated 7 months ago
- This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software E… ☆1,437 · Updated 6 months ago
- ☆885 · Updated last month