zorazrw / awesome-tool-llm
☆241 · Updated last year
Alternatives and similar repositories for awesome-tool-llm
Users that are interested in awesome-tool-llm are comparing it to the libraries listed below
- Augmented LLM with self-reflection ☆135 · Updated 2 years ago
- A benchmark list for evaluation of large language models. ☆152 · Updated 3 months ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆264 · Updated last year
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆165 · Updated last year
- Data and Code for Program of Thoughts [TMLR 2023] ☆300 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆134 · Updated last year
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆368 · Updated last year
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆134 · Updated last year
- 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… ☆324 · Updated 3 weeks ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆218 · Updated 5 months ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆192 · Updated last year
- ☆158 · Updated last month
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 5 months ago
- EMNLP'23 survey: a curation of awesome papers and resources on refreshing large language models (LLMs) without expensive retraining. ☆136 · Updated last year
- FireAct: Toward Language Agent Fine-tuning ☆286 · Updated 2 years ago
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆235 · Updated 2 years ago
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆100 · Updated last year
- [NeurIPS 2024] Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? ☆135 · Updated last year
- Generative Judge for Evaluating Alignment ☆248 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆149 · Updated last year
- Critique-out-Loud Reward Models ☆70 · Updated last year
- [ICLR 2025] BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆179 · Updated 2 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆319 · Updated last year
- Awesome LLM Self-Consistency: a curated list of self-consistency in Large Language Models ☆113 · Updated 4 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆179 · Updated 6 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆126 · Updated last year
- Code and data accompanying our paper on arXiv "Faithful Chain-of-Thought Reasoning". ☆165 · Updated last year
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆159 · Updated last year