zorazrw / awesome-tool-llm
☆239 · Updated last year
Alternatives and similar repositories for awesome-tool-llm
Users interested in awesome-tool-llm are comparing it to the repositories listed below.
- Augmented LLM with self-reflection ☆132 · Updated last year
- A benchmark list for the evaluation of large language models. ☆140 · Updated last week
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents", ACL'24 Best Resource Paper ☆245 · Updated last month
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆160 · Updated last year
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆346 · Updated last year
- Data and Code for Program of Thoughts [TMLR 2023] ☆285 · Updated last year
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆263 · Updated last year
- Generative Judge for Evaluating Alignment ☆245 · Updated last year
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆95 · Updated last year
- FireAct: Toward Language Agent Fine-tuning ☆282 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆241 · Updated 10 months ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆185 · Updated 11 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆131 · Updated last year
- Official repo for the ICLR 2024 paper "MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback" by Xingyao Wang et al. ☆129 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 · Updated 10 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆171 · Updated 3 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆262 · Updated 2 months ago
- EMNLP'23 survey: a curation of awesome papers and resources on refreshing large language models (LLMs) without expensive retraining. ☆134 · Updated last year
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆139 · Updated 10 months ago
- A tool-learning benchmark, built on ToolBench, that aims to balance stability and realism. ☆179 · Updated 5 months ago
- Code implementation of synthetic continued pretraining ☆129 · Updated 8 months ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels of difficulty. ☆277 · Updated 2 years ago
- This repository provides the original implementation of "Detecting Pretraining Data from Large Language Models" by Weijia Shi et al. ☆231 · Updated last year
- [NeurIPS 2024] Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? ☆130 · Updated last year
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆209 · Updated 3 months ago
- Code and data for the ACL 2024 paper "Faithful Logical Reasoning via Symbolic Chain-of-Thought". ☆191 · Updated last year
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆149 · Updated 10 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆155 · Updated 6 months ago
- Awesome LLM Self-Consistency: a curated list of resources on self-consistency in large language models ☆108 · Updated last month