ServiceNow / AgentLab
AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and reproducibility.
☆358 · Updated last week
Alternatives and similar repositories for AgentLab
Users interested in AgentLab are comparing it to the libraries listed below.
- TapeAgents is a framework that facilitates all stages of the LLM agent development lifecycle ☆285 · Updated this week
- WorkArena: How Capable Are Web Agents at Solving Common Knowledge Work Tasks? ☆195 · Updated last week
- 🌎💪 BrowserGym, a Gym environment for web task automation ☆806 · Updated last week
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆498 · Updated 2 months ago
- AWM: Agent Workflow Memory ☆291 · Updated 5 months ago
- An agent benchmark with tasks in a simulated software company. ☆488 · Updated last week
- VisualWebArena is a benchmark for multimodal agents. ☆357 · Updated 8 months ago
- A collection of resources for computer-use GUI agents, including videos, blogs, papers, and projects. ☆397 · Updated last month
- Code for the paper "🌳 Tree Search for Language Model Agents" ☆205 · Updated 11 months ago
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents", ACL'24 Best Resource Paper ☆221 · Updated 2 months ago
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆482 · Updated 3 weeks ago
- OS-ATLAS: A Foundation Action Model for Generalist GUI Agents ☆356 · Updated 2 months ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆566 · Updated 3 months ago
- Building Open LLM Web Agents with Self-Evolving Online Curriculum RL ☆420 · Updated last month
- Code and Data for Tau-Bench ☆666 · Updated this week
- Scaling Data for SWE-agents ☆293 · Updated this week
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆207 · Updated this week
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆223 · Updated 2 months ago
- [NeurIPS 2022] 🛒 WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents ☆364 · Updated 10 months ago
- WebLINX is a benchmark for building web navigation agents with conversational capabilities ☆153 · Updated 5 months ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆242 · Updated last week
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆329 · Updated last year
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆795 · Updated 3 weeks ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆529 · Updated 2 weeks ago
- AndroidWorld is an environment and benchmark for autonomous agents ☆354 · Updated 2 weeks ago
- Code for "WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models" ☆847 · Updated last year
- Official repo for the paper "DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning" ☆366 · Updated 4 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆173 · Updated 4 months ago