princeton-nlp / WebShop
[NeurIPS 2022] WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents
☆384 · Updated 11 months ago
Alternatives and similar repositories for WebShop
Users interested in WebShop are comparing it to the libraries listed below.
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆340 · Updated last year
- VisualWebArena is a benchmark for multimodal agents. ☆367 · Updated 9 months ago
- ICML 2024: Improving Factuality and Reasoning in Language Models through Multiagent Debate ☆459 · Updated 3 months ago
- Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agent", ACL'24 Best Resource Pap… ☆238 · Updated last week
- SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks ☆311 · Updated 10 months ago
- An extensible benchmark for evaluating large language models on planning ☆397 · Updated last month
- FireAct: Toward Language Agent Fine-tuning ☆282 · Updated last year
- ☆183 · Updated 6 months ago
- Code for the paper "Tree Search for Language Model Agents" ☆211 · Updated last year
- [ICML 2024] Official repository for "Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models" ☆775 · Updated last year
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆158 · Updated last year
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark (https://arxiv.org/abs/2306.14898) ☆223 · Updated last year
- Repo for the paper "Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration" ☆344 · Updated last year
- Code for arXiv 2023: Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback ☆208 · Updated 2 years ago
- Data and Code for Program of Thoughts [TMLR 2023] ☆282 · Updated last year
- Data and code for FreshLLMs (https://arxiv.org/abs/2310.03214) ☆368 · Updated this week
- Codes for our paper "ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate" ☆295 · Updated 10 months ago
- RewardBench: the first evaluation tool for reward models. ☆624 · Updated 2 months ago
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆724 · Updated 10 months ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆245 · Updated last year
- (ICML 2024) AlphaZero-like tree search can guide large language model decoding and training ☆279 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆507 · Updated last year
- Official implementation of the TMLR paper "Cumulative Reasoning With Large Language Models" (https://arxiv.org/abs/2308.04371) ☆298 · Updated 3 weeks ago
- Official Implementation of Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization ☆165 · Updated last year
- Paper collection on building and evaluating language model agents via executable language grounding ☆361 · Updated last year
- Multi-agent Social Simulation + Efficient, Effective, and Stable alternative of RLHF. Code for the paper "Training Socially Aligned Langu… ☆353 · Updated 2 years ago
- ALFWorld: Aligning Text and Embodied Environments for Interactive Learning ☆507 · Updated last month
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆264 · Updated last year
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆274 · Updated 2 years ago
- An implementation of Everything of Thoughts (XoT). ☆148 · Updated last year