ToolBeHonest / ToolBeHonest
[EMNLP 2024] A Multi-level Hallucination Diagnostic Benchmark for Tool-Augmented Large Language Models.
☆20 · Updated last year
Alternatives and similar repositories for ToolBeHonest
Users interested in ToolBeHonest are comparing it to the repositories listed below.
- Paper list and datasets for the paper: A Survey on Data Selection for LLM Instruction Tuning ☆47 · Updated 2 weeks ago
- [ACL 2024] Making Long-Context Language Models Better Multi-Hop Reasoners ☆19 · Updated last year
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated 2 years ago
- Collection of papers for scalable automated alignment. ☆93 · Updated last year
- Self-Knowledge Guided Retrieval Augmentation for Large Language Models (EMNLP Findings 2023) ☆28 · Updated 2 years ago
- [EMNLP 2023] Plan, Verify and Switch: Integrated Reasoning with Diverse X-of-Thoughts ☆27 · Updated 2 years ago
- Enhancing contextual understanding in large language models through contrastive decoding ☆20 · Updated last year
- ☆48 · Updated 2 years ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- Self-adaptive in-context learning ☆45 · Updated 2 years ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆151 · Updated last year
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆85 · Updated 2 years ago
- [ACL 2023] CodeIE: Large Code Generation Models are Better Few-Shot Information Extractors ☆40 · Updated last month
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆168 · Updated 2 years ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆73 · Updated 8 months ago
- ☆16 · Updated 2 years ago
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆50 · Updated last year
- Do Large Language Models Know What They Don't Know? ☆102 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated last year
- ☆87 · Updated 2 years ago
- ☆78 · Updated last year
- Official GitHub repo for AutoDetect, an automated weakness detection framework for LLMs. ☆46 · Updated last year
- ☆51 · Updated last year
- Evaluating the faithfulness of long-context language models ☆30 · Updated last year
- [ICLR 2024 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆81 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆119 · Updated last year
- Implementation of "Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation" ☆82 · Updated 2 years ago
- [EMNLP 2023] Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration ☆36 · Updated last year
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L…" ☆53 · Updated last year