apple / ToolSandbox · Links
☆217 · Updated last year
Alternatives and similar repositories for ToolSandbox
Users interested in ToolSandbox are comparing it to the libraries listed below.
- Complex Function Calling Benchmark. ☆139 · Updated 9 months ago
- Self-Reflection in LLM Agents: Effects on Problem-Solving Performance ☆86 · Updated 11 months ago
- Beating the GAIA benchmark with Transformers Agents. 🚀 ☆138 · Updated 8 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆116 · Updated this week
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆564 · Updated 2 months ago
- AWM: Agent Workflow Memory ☆335 · Updated 8 months ago
- The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆145 · Updated last year
- 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… ☆290 · Updated this week
- MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024] ☆186 · Updated last month
- ☆239 · Updated last year
- ☆285 · Updated 3 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆355 · Updated last year
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆321 · Updated last week
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆144 · Updated 11 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆218 · Updated 4 months ago
- Code for the paper 🌳 Tree Search for Language Model Agents ☆217 · Updated last year
- Benchmarking Chat Assistants on Long-Term Interactive Memory (ICLR 2025) ☆240 · Updated 2 weeks ago
- Comprehensive benchmark for RAG ☆226 · Updated 4 months ago
- ☆146 · Updated last week
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆102 · Updated 2 months ago
- Official implementation of the paper "On the Diagram of Thought" (https://arxiv.org/abs/2409.10038) ☆187 · Updated last month
- DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents ☆433 · Updated 2 months ago
- Official Implementation of "Multi-Head RAG: Solving Multi-Aspect Problems with LLMs" ☆229 · Updated 3 weeks ago
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents https://www.arxiv.org/pdf/2503.019… ☆176 · Updated this week
- Code that accompanies the public release of the paper Lost in Conversation (https://arxiv.org/abs/2505.06120) ☆175 · Updated 4 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆187 · Updated 7 months ago
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆163 · Updated last year
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks". ☆205 · Updated 4 months ago
- ☆122 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆240 · Updated 11 months ago