zorazrw / awesome-tool-llm
☆240 · Updated last year
Alternatives and similar repositories for awesome-tool-llm
Users that are interested in awesome-tool-llm are comparing it to the libraries listed below
- Augmented LLM with self-reflection ☆133 · Updated last year
- A benchmark list for the evaluation of large language models. ☆143 · Updated 3 weeks ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆354 · Updated last year
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆212 · Updated 3 months ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings [NeurIPS 2023 Oral] ☆262 · Updated last year
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agent", ACL'24 Best Resource Pap… ☆250 · Updated last month
- Benchmarking LLMs with Challenging Tasks from Real Users ☆241 · Updated 11 months ago
- ToolBench, an evaluation suite for LLM tool-manipulation capabilities. ☆162 · Updated last year
- FireAct: Toward Language Agent Fine-tuning ☆281 · Updated last year
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆96 · Updated last year
- Data and code for Program of Thoughts [TMLR 2023] ☆286 · Updated last year
- Official repo for the ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆130 · Updated last year
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 2 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆131 · Updated last year
- [ACL 2024] LooGLE: Long-Context Evaluation for Long-Context Language Models ☆184 · Updated 11 months ago
- Awesome LLM Self-Consistency: a curated list of self-consistency in large language models ☆109 · Updated 2 months ago
- Code implementation of synthetic continued pretraining ☆133 · Updated 9 months ago
- Generative Judge for Evaluating Alignment ☆246 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 · Updated 11 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆171 · Updated 4 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆112 · Updated last week
- EMNLP'23 survey: a curated collection of papers and resources on refreshing large language models (LLMs) without expensive retraining. ☆135 · Updated last year
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆99 · Updated last week
- ☆96 · Updated 9 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆121 · Updated last year
- [ICLR 2025] Benchmarking Agentic Workflow Generation ☆129 · Updated 7 months ago
- Code and data accompanying the arXiv paper "Faithful Chain-of-Thought Reasoning". ☆163 · Updated last year
- [NeurIPS 2024] Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? ☆131 · Updated last year
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias". ☆153 · Updated last year
- A simple unified framework for evaluating LLMs ☆250 · Updated 5 months ago