A reading list on hallucination in LLMs. Check out our survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models"
☆1,082 · Updated Sep 27, 2025
Alternatives and similar repositories for llm-hallucination-survey
Users interested in llm-hallucination-survey are comparing it to the repositories listed below.
- LLM hallucination paper list ☆334 · Updated Mar 11, 2024
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆578 · Updated Feb 12, 2024
- List of papers on hallucination detection in LLMs. ☆1,080 · Updated Apr 23, 2026
- The repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity" ☆341 · Updated Mar 28, 2026
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆554 · Updated Jan 17, 2025
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆611 · Updated Jun 26, 2024
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆434 · Updated Apr 13, 2025
- A trend started by "Chain of Thought Prompting Elicits Reasoning in Large Language Models". ☆2,104 · Updated Oct 5, 2023
- Must-read Papers on Knowledge Editing for Large Language Models. ☆1,229 · Updated Jul 12, 2025
- Aligning Large Language Models with Human: A Survey ☆741 · Updated Sep 11, 2023
- A collection of research papers on Self-Correcting Large Language Models with Automated Feedback. ☆570 · Updated Oct 28, 2024
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆575 · Updated Jan 28, 2025
- A Survey of Hallucination in Large Foundation Models ☆56 · Updated Jan 10, 2024
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆49 · Updated Oct 21, 2023
- [ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs. ☆2,804 · Updated Apr 1, 2026
- Paper List for In-context Learning 🌷 ☆874 · Updated Oct 8, 2024
- Implementation of "Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation" ☆82 · Updated Jul 31, 2023
- [ACL 2023] Reasoning with Language Model Prompting: A Survey ☆1,004 · Updated May 21, 2025
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,602 · Updated Apr 17, 2026
- 😎 Curated list of awesome LMM hallucination papers, methods & resources. ☆150 · Updated Mar 23, 2024
- TruthfulQA: Measuring How Models Imitate Human Falsehoods ☆911 · Updated Jan 16, 2025
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆258 · Updated Aug 21, 2025
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… ☆9,441 · Updated this week
- From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 🍓 ☆3,605 · Updated Apr 20, 2026
- 📖 A curated list of resources dedicated to hallucination in multimodal large language models (MLLMs). ☆1,016 · Updated Sep 27, 2025
- Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models ☆818 · Updated Apr 23, 2026
- Do Large Language Models Know What They Don't Know? ☆103 · Updated Nov 8, 2024
- [ICLR 2024] Deita: Data-Efficient Instruction Tuning for Alignment ☆593 · Updated Dec 9, 2024
- [NeurIPS 2023] RRHF & Wombat ☆806 · Updated Sep 22, 2023
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated Feb 27, 2024
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,600 · Updated Nov 24, 2025
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆514 · Updated Oct 9, 2024
- A collection of papers on retrieval-based (augmented) language models. ☆232 · Updated May 24, 2024
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,771 · Updated Aug 4, 2024
- A curated list of reinforcement learning with human feedback resources (continually updated) ☆4,358 · Updated Dec 9, 2025
- Latest Advances on Multimodal Large Language Models ☆17,736 · Updated May 1, 2026
- The official GitHub page for the survey paper "A Survey of Large Language Models". ☆12,153 · Updated Mar 11, 2025
- A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques. ☆6,906 · Updated Dec 17, 2025