StonyBrookNLP / appworld
🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agents, ACL'24 Best Resource Paper.
⭐367 · Updated 2 months ago
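A minimal usage sketch for the benchmark, assuming the `appworld` PyPI package and the `AppWorld` / `load_task_ids` entry points shown in the repository README (treat exact names and signatures as illustrative, not authoritative):

```python
# Assumed setup, per the README: pip install appworld && appworld install && appworld download data
from appworld import AppWorld, load_task_ids  # entry points as shown in the repo README

task_ids = load_task_ids("train")  # assumed split name; the benchmark ships multiple splits

with AppWorld(task_id=task_ids[0], experiment_name="minimal_agent") as world:
    print(world.task.instruction)  # natural-language task the agent must complete
    # The agent acts by executing Python against the simulated apps' APIs:
    world.execute("print(apis.api_docs.show_app_descriptions())")
    print(world.evaluate())  # programmatic check of task progress (assumed method name)
```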
Alternatives and similar repositories for appworld
Users interested in appworld are comparing it to the libraries listed below.
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ⭐260 · Updated 8 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ⭐389 · Updated last year
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ⭐418 · Updated last week
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ⭐120 · Updated 2 months ago
- AWM: Agent Workflow Memory ⭐387 · Updated last month
- A benchmark list for the evaluation of large language models. ⭐159 · Updated 2 weeks ago
- Code for the paper 🌳 Tree Search for Language Model Agents ⭐219 · Updated last year
- ⭐242 · Updated last year
- [ICLR 2025] Benchmarking Agentic Workflow Generation ⭐142 · Updated 11 months ago
- A new tool learning benchmark aiming at well-balanced stability and reality, based on ToolBench. ⭐213 · Updated 9 months ago
- Official implementation of Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization ⭐192 · Updated last year
- [NeurIPS 2024] Agent Planning with World Knowledge Model ⭐162 · Updated last year
- [NeurIPS 2022] 🛒WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents ⭐475 · Updated last year
- Building Open LLM Web Agents with Self-Evolving Online Curriculum RL ⭐499 · Updated 7 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ⭐159 · Updated last year
- Resources for our paper: "Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training" ⭐167 · Updated 3 months ago
- ⭐328 · Updated 8 months ago
- A Comprehensive Benchmark for Software Development. ⭐127 · Updated last year
- [NeurIPS 2024 D&B Track] GTA: A Benchmark for General Tool Agents ⭐133 · Updated 10 months ago
- An Illusion of Progress? Assessing the Current State of Web Agents ⭐141 · Updated last month
- ⭐224 · Updated 10 months ago
- VisualWebArena is a benchmark for multimodal agents. ⭐431 · Updated last year
- Towards Large Multimodal Models as Visual Foundation Agents ⭐256 · Updated 9 months ago
- Code for the paper: Autonomous Evaluation and Refinement of Digital Agents [COLM 2024] ⭐148 · Updated last year
- A curated collection of LLM reasoning and planning resources, including key papers, limitations, benchmarks, and additional learning mate… ⭐308 · Updated 11 months ago
- ⭐117 · Updated last year
- RewardBench: the first evaluation tool for reward models. ⭐685 · Updated this week
- [ICLR 2026] Learning to Reason without External Rewards ⭐388 · Updated last week
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM Agents (https://www.arxiv.org/pdf/2503.019…) ⭐213 · Updated 3 months ago
- ⭐275 · Updated 5 months ago