Junjie-Ye / RoTBench
[EMNLP 2024] RoTBench: A Multi-Level Benchmark for Evaluating the Robustness of Large Language Models in Tool Learning
☆14 · Updated 7 months ago
Alternatives and similar repositories for RoTBench
Users interested in RoTBench are comparing it to the repositories listed below.
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆64 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆126 · Updated last year
- [NAACL 2025] The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆29 · Updated last year
- [ACL'24] Code and data of the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆134 · Updated last year
- ☆102 · Updated 2 years ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆73 · Updated 7 months ago
- Self-Knowledge Guided Retrieval Augmentation for Large Language Models (EMNLP Findings 2023) ☆28 · Updated 2 years ago
- Evaluate the Quality of Critique ☆36 · Updated last year
- "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning" (ACL 2023) ☆15 · Updated 2 years ago
- Repository for the paper "CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models" ☆29 · Updated 2 years ago
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆101 · Updated last year
- [NAACL 2024] Making Language Models Better Tool Learners with Execution Feedback ☆42 · Updated last year
- Code for the ACL 2025 publication "Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs" ☆33 · Updated 6 months ago
- Contrastive Chain-of-Thought Prompting ☆68 · Updated 2 years ago
- [EMNLP'24 (Main)] DRPO (Dynamic Rewarding with Prompt Optimization) is a tuning-free approach for self-alignment. DRPO leverages a search-… ☆24 · Updated last year
- Official repository for ALT (ALignment with Textual feedback). ☆10 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Companion code to https://arxiv.org/abs/2402.15491 ☆21 · Updated 3 months ago
- Official repo for the ICLR 2024 paper "MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback" by Xingyao Wang*, Ziha… ☆134 · Updated last year
- Towards Systematic Measurement for Long Text Quality ☆37 · Updated last year
- The official repo of "WebExplorer: Explore and Evolve for Training Long-Horizon Web Agents" ☆90 · Updated 3 months ago
- [ACL 2024] Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios ☆67 · Updated 5 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆52 · Updated 7 months ago
- Do Large Language Models Know What They Don’t Know? ☆102 · Updated last year
- Methods and evaluation for aligning language models temporally ☆30 · Updated last year
- ☆17 · Updated 10 months ago
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆132 · Updated 2 years ago
- AbstainQA, ACL 2024 ☆28 · Updated last year
- Official code repository for "AutoScale📈: Scale-Aware Data Mixing for Pre-Training LLMs", published as a conference paper at COLM 2025… ☆12 · Updated 5 months ago