hongbinye / Cognitive-Mirage-Hallucinations-in-LLMs
Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models"
☆47 · Updated 10 months ago
Related projects:
- The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆62 · Updated 3 weeks ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆82 · Updated 2 months ago
- Paper list and datasets for the paper "A Survey on Data Selection for LLM Instruction Tuning" ☆27 · Updated 7 months ago
- [ICLR 2024 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆51 · Updated 5 months ago
- Do Large Language Models Know What They Don’t Know? ☆84 · Updated 9 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆45 · Updated 5 months ago
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆56 · Updated 6 months ago
- Paper list for "The Life Cycle of Knowledge in Big Language Models: A Survey" ☆61 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆134 · Updated 2 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆62 · Updated 7 months ago
- Contrastive Chain-of-Thought Prompting ☆50 · Updated 10 months ago
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆119 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆96 · Updated last week
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆59 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆104 · Updated 2 months ago
- Scaling Sentence Embeddings with Large Language Models ☆93 · Updated 5 months ago
- [ACL 2024] Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios ☆36 · Updated 5 months ago
- Self-adaptive in-context learning ☆42 · Updated last year
- [NAACL 2024] Making Language Models Better Tool Learners with Execution Feedback ☆35 · Updated 6 months ago
- A Survey of Hallucination in Large Foundation Models ☆48 · Updated 8 months ago
- A framework for editing chains of thought (CoTs) for better factuality ☆39 · Updated 9 months ago