zorazrw / awesome-tool-llm
☆242 · Updated last year
Alternatives and similar repositories for awesome-tool-llm
Users interested in awesome-tool-llm are comparing it to the libraries listed below.
- A benchmark list for the evaluation of large language models. ☆154 · Updated 4 months ago
- Augmented LLM with self-reflection ☆135 · Updated 2 years ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆267 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆134 · Updated last year
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆102 · Updated last year
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆171 · Updated last year
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆132 · Updated last year
- Data and Code for Program of Thoughts [TMLR 2023] ☆303 · Updated last year
- FireAct: Toward Language Agent Fine-tuning ☆289 · Updated 2 years ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆264 · Updated 6 months ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆194 · Updated last year
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆384 · Updated last year
- Awesome LLM Self-Consistency: a curated list of self-consistency in Large Language Models ☆117 · Updated 5 months ago
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆144 · Updated 3 weeks ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆245 · Updated last year
- ☆166 · Updated 3 months ago
- Generative Judge for Evaluating Alignment ☆248 · Updated last year
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆182 · Updated 7 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆223 · Updated 7 months ago
- 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… ☆354 · Updated 2 months ago
- EMNLP'23 survey: a curation of awesome papers and resources on refreshing large language models (LLMs) without expensive retraining. ☆136 · Updated 2 years ago
- A Comprehensive Benchmark for Software Development. ☆124 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 · Updated last year
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆237 · Updated 2 years ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆284 · Updated 2 years ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆145 · Updated last year
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆324 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆193 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆126 · Updated last year
- Code and data accompanying our paper on arXiv "Faithful Chain-of-Thought Reasoning". ☆165 · Updated last year