apple / ToolSandbox
☆224 · Updated last week
Alternatives and similar repositories for ToolSandbox
Users interested in ToolSandbox are comparing it to the libraries listed below.
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆116 · Updated 3 weeks ago
- Complex Function Calling Benchmark. ☆147 · Updated 9 months ago
- Beating the GAIA benchmark with Transformers Agents. 🚀 ☆138 · Updated 8 months ago
- Comprehensive benchmark for RAG ☆237 · Updated 5 months ago
- Self-Reflection in LLM Agents: Effects on Problem-Solving Performance ☆89 · Updated 11 months ago
- AWM: Agent Workflow Memory ☆353 · Updated 9 months ago
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆578 · Updated 2 months ago
- 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… ☆307 · Updated this week
- The code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆149 · Updated last year
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆348 · Updated this week
- ☆239 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆244 · Updated last year
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆360 · Updated last year
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents https://www.arxiv.org/pdf/2503.019… ☆183 · Updated 2 weeks ago
- Code for the paper 🌳 Tree Search for Language Model Agents ☆217 · Updated last year
- MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024] ☆189 · Updated 2 months ago
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks". ☆206 · Updated 4 months ago
- ☆293 · Updated 3 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆219 · Updated 5 months ago
- Official Implementation of "Multi-Head RAG: Solving Multi-Aspect Problems with LLMs" ☆232 · Updated last month
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆102 · Updated 3 months ago
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆107 · Updated 2 weeks ago
- Official repo for "LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs". ☆242 · Updated last year
- ☆152 · Updated last month
- Official implementation of the paper "On the Diagram of Thought" (https://arxiv.org/abs/2409.10038) ☆188 · Updated 2 months ago
- DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents ☆465 · Updated 3 months ago
- Official Code Repository for the paper "Distilling LLM Agent into Small Models with Retrieval and Code Tools" ☆173 · Updated 3 weeks ago
- WorkBench: a Benchmark Dataset for Agents in a Realistic Workplace Setting. ☆53 · Updated last year
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆164 · Updated last year
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆207 · Updated 11 months ago