laude-institute / terminal-bench
A benchmark for LLMs on complicated tasks in the terminal
☆1,162 · Updated this week
Alternatives and similar repositories for terminal-bench
Users interested in terminal-bench are comparing it to the repositories listed below.
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆479 · Updated this week
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆382 · Updated this week
- τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment ☆503 · Updated this week
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆624 · Updated 8 months ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆589 · Updated 4 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆732 · Updated 4 months ago
- Post-training with Tinker ☆2,313 · Updated this week
- OpenAI Frontier Evals ☆951 · Updated this week
- ☆613 · Updated 3 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,325 · Updated this week
- Code and Data for Tau-Bench ☆987 · Updated 3 months ago
- An agent benchmark with tasks in a simulated software company. ☆592 · Updated 2 weeks ago
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆481 · Updated this week
- Seed-Coder is a family of lightweight open-source code LLMs comprising base, instruct and reasoning models, developed by ByteDance Seed. ☆697 · Updated 6 months ago
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI ☆453 · Updated last month
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆377 · Updated 3 weeks ago
- 🐉 Loong: Synthesize Long CoTs at Scale through Verifiers. ☆471 · Updated 2 weeks ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,199 · Updated last week
- Automatic evals for LLMs ☆559 · Updated 5 months ago
- ☆1,351 · Updated 2 months ago
- ☆279 · Updated 2 months ago
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆561 · Updated 7 months ago
- Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆225 · Updated this week
- Prompt-to-Leaderboard ☆265 · Updated 6 months ago
- Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving ☆286 · Updated last week
- 🌎💪 BrowserGym, a Gym environment for web task automation ☆1,029 · Updated this week
- Synthetic data curation for post-training and structured data extraction ☆1,564 · Updated 4 months ago
- Coding problems used in aider's polyglot benchmark ☆194 · Updated 11 months ago
- A MemAgent framework that can be extrapolated to 3.5M tokens, along with an RL training framework for any agent workflow. ☆813 · Updated 4 months ago
- The #1 open-source SWE-bench Verified implementation ☆839 · Updated 5 months ago