ServiceNow / WorkArena
WorkArena: How Capable are Web Agents at Solving Common Knowledge Work Tasks?
☆230 · Updated 2 weeks ago
Alternatives and similar repositories for WorkArena
Users interested in WorkArena are comparing it to the repositories listed below.
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆509 · Updated 2 weeks ago
- Code for the paper 🌳 Tree Search for Language Model Agents ☆219 · Updated last year
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆302 · Updated last month
- VisualWebArena is a benchmark for multimodal agents. ☆431 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 10 months ago
- [NeurIPS 2023 D&B] Code repository for InterCode benchmark https://arxiv.org/abs/2306.14898 ☆238 · Updated last year
- 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… ☆367 · Updated 2 months ago
- ☆217 · Updated last week
- Code for Paper: Autonomous Evaluation and Refinement of Digital Agents [COLM 2024] ☆148 · Updated last year
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆225 · Updated 7 months ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆418 · Updated 2 weeks ago
- AWM: Agent Workflow Memory ☆389 · Updated last month
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆345 · Updated 3 weeks ago
- [NeurIPS 2022] 🛒 WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents ☆475 · Updated last year
- An Illusion of Progress? Assessing the Current State of Web Agents ☆143 · Updated last month
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆120 · Updated 2 months ago
- ☆133 · Updated 3 months ago
- ☆41 · Updated last year
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆625 · Updated 6 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆389 · Updated last year
- 🌎💪 BrowserGym, a Gym environment for web task automation ☆1,099 · Updated last week
- Benchmarking LLMs with Challenging Tasks from Real Users ☆245 · Updated last year
- Official Repo for InSTA: Towards Internet-Scale Training For Agents ☆55 · Updated 6 months ago
- A simple unified framework for evaluating LLMs ☆261 · Updated 9 months ago
- MiniWoB++: a web interaction benchmark for reinforcement learning ☆366 · Updated 9 months ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆538 · Updated this week
- A benchmark that challenges language models to code solutions for scientific problems ☆169 · Updated last week
- ☆236 · Updated 3 months ago
- Interaction-first method for generating demonstrations for web-agents on any website ☆51 · Updated 9 months ago
- Code and example data for the paper: Rule Based Rewards for Language Model Safety ☆205 · Updated last year
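Several of the entries above (BrowserGym, MiniWoB++, WebShop) expose web tasks through a Gym-style environment interface. A minimal sketch of that interaction loop is below; the environment is a self-contained stub written here for illustration, not the real BrowserGym or MiniWoB++ API, and it only demonstrates the Gymnasium-style `reset`/`step` contract.

```python
# Sketch of the Gym-style agent loop shared by web-agent benchmarks.
# StubWebEnv is a hypothetical stand-in; real environments return page
# observations (DOM, screenshot, AXTree) and task-specific rewards.

class StubWebEnv:
    """Toy stand-in for a web-agent environment (hypothetical)."""

    def __init__(self, max_steps: int = 3):
        self.max_steps = max_steps
        self.t = 0

    def reset(self):
        self.t = 0
        obs = {"url": "about:blank", "dom": "<html></html>"}
        return obs, {}  # Gymnasium-style (observation, info) pair

    def step(self, action: str):
        self.t += 1
        obs = {"url": "about:blank", "dom": f"<html>step {self.t}</html>"}
        reward = 1.0 if self.t == self.max_steps else 0.0  # reward on "task done"
        terminated = self.t >= self.max_steps
        return obs, reward, terminated, False, {}  # (obs, reward, terminated, truncated, info)


def run_episode(env, policy):
    """Roll out one episode and return the total reward."""
    obs, info = env.reset()
    total, done = 0.0, False
    while not done:
        action = policy(obs)  # the agent maps an observation to a browser action
        obs, reward, terminated, truncated, info = env.step(action)
        total += reward
        done = terminated or truncated
    return total


total = run_episode(StubWebEnv(), policy=lambda obs: "click('body')")
print(total)  # → 1.0
```

Swapping the stub for a real environment changes only how `reset`/`step` are constructed; the episode loop itself stays the same, which is what makes agents portable across these benchmarks.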