laude-institute / terminal-bench
A benchmark for LLMs on complicated tasks in the terminal
☆1,069 · Updated this week
Alternatives and similar repositories for terminal-bench
Users interested in terminal-bench are comparing it to the libraries listed below.
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆451 · Updated this week
- Sandboxed code execution for AI agents, locally or in the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆358 · Updated last week
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆616 · Updated 8 months ago
- τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment ☆412 · Updated last week
- OpenAI Frontier Evals ☆937 · Updated 2 weeks ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆573 · Updated 3 months ago
- ☆611 · Updated 2 months ago
- An agent benchmark with tasks in a simulated software company. ☆581 · Updated last month
- Post-training with Tinker ☆1,932 · Updated this week
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆708 · Updated 4 months ago
- Open-sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆223 · Updated 3 weeks ago
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI ☆444 · Updated last month
- ☆263 · Updated 2 months ago
- Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving ☆278 · Updated 2 weeks ago
- Open-source interpretability platform 🧠 ☆486 · Updated this week
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆348 · Updated last week
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines. ☆820 · Updated this week
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆469 · Updated last week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,202 · Updated this week
- 🐉 Loong: Synthesize Long CoTs at Scale through Verifiers. ☆460 · Updated last month
- Enhancing AI Software Engineering with Repository-level Code Graph ☆225 · Updated 7 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering. ☆1,144 · Updated last week
- Research code artifacts for Code World Model (CWM) including inference tools, reproducibility, and documentation. ☆712 · Updated last month
- Automatic evals for LLMs ☆556 · Updated 4 months ago
- ☆843 · Updated 2 months ago
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆558 · Updated 6 months ago
- Code and Data for Tau-Bench ☆942 · Updated 2 months ago
- Coding problems used in aider's polyglot benchmark ☆190 · Updated 10 months ago
- Seed-Coder is a family of lightweight open-source code LLMs comprising base, instruct and reasoning models, developed by ByteDance Seed. ☆639 · Updated 5 months ago
- Open-source coding LLM for software engineering tasks ☆1,035 · Updated last month