THUNLP-MT / StableToolBench
A new tool learning benchmark based on ToolBench, aiming at a balance between stability and realism.
☆146 · Updated 3 weeks ago
Alternatives and similar repositories for StableToolBench:
Users interested in StableToolBench are comparing it to the repositories listed below
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models ☆182 · Updated 6 months ago
- [ACL 2024 Main Conference] Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents ☆138 · Updated 6 months ago
- ☆150 · Updated 4 months ago
- ☆276 · Updated 9 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO ☆306 · Updated 9 months ago
- [ICLR 2024] MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback, by Xingyao Wang*, Ziha… ☆123 · Updated 11 months ago
- [ACL 2024] Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios ☆56 · Updated last year
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆86 · Updated last year
- [NeurIPS 2023 Oral] ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings ☆262 · Updated last year
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆90 · Updated 2 weeks ago
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆85 · Updated 9 months ago
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆135 · Updated 4 months ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆65 · Updated 5 months ago
- [NeurIPS 2024] Official implementation of Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs ☆118 · Updated last month
- ☆144 · Updated last month
- ☆163 · Updated last month
- Collection of papers for scalable automated alignment ☆89 · Updated 6 months ago
- ☆327 · Updated 3 months ago
- Generative Judge for Evaluating Alignment ☆236 · Updated last year
- ☆132 · Updated 4 months ago
- [NeurIPS 2024 Oral] An Analytical Evaluation Board of Multi-turn LLM Agents ☆311 · Updated 11 months ago
- Implementation of "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆52 · Updated 5 months ago
- Repo for the paper "Free Process Rewards without Process Labels" ☆145 · Updated last month
- [EMNLP 2024 Oral] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆123 · Updated 5 months ago
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆268 · Updated last year
- Official repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆242 · Updated 2 weeks ago
- [NeurIPS 2024] Agent Planning with World Knowledge Model ☆131 · Updated 4 months ago
- [NAACL 2024 Outstanding Paper] R-Tuning: Instructing Large Language Models to Say "I Don't… ☆110 · Updated 9 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆208 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆124 · Updated 9 months ago