zorazrw / awesome-tool-llm
☆217 Updated 7 months ago
Alternatives and similar repositories for awesome-tool-llm:
Users interested in awesome-tool-llm are comparing it to the libraries listed below.
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆137 Updated 5 months ago
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆118 Updated 4 months ago
- A benchmark list for evaluation of large language models. ☆92 Updated 3 weeks ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆297 Updated 10 months ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆179 Updated 5 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆219 Updated 5 months ago
- Augmented LLM with self-reflection ☆117 Updated last year
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆140 Updated this week
- Official Repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆230 Updated last month
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆122 Updated 9 months ago
- ☆144 Updated 3 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆183 Updated this week
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆124 Updated 8 months ago
- FireAct: Toward Language Agent Fine-tuning ☆274 Updated last year
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆102 Updated last month
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆85 Updated last year
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agent", ACL'24 Best Resource Pap… ☆170 Updated last week
- A Comprehensive Survey on Long Context Language Modeling ☆113 Updated last week
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆313 Updated 6 months ago
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆218 Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆109 Updated 8 months ago
- Reformatted Alignment ☆115 Updated 6 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆108 Updated last month
- A new tool learning benchmark aiming at well-balanced stability and reality, based on ToolBench. ☆139 Updated last week
- [NeurIPS 2024] Agent Planning with World Knowledge Model ☆121 Updated 3 months ago
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆150 Updated last year
- Reproducing R1 for Code with Reliable Rewards ☆140 Updated 3 weeks ago
- Conifer: Improving Complex Constrained Instruction-Following Ability of Large Language Models ☆87 Updated 11 months ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels …