SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
☆602 · Updated Jun 26, 2024
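As context for the comparisons below: the core SelfCheckGPT idea is sampling-based consistency checking. Sample several stochastic re-generations of the same answer, then flag sentences of the main response that the samples do not support. The following is a minimal toy sketch of that idea, using plain unigram overlap as the consistency proxy; the actual package provides stronger scorers (BERTScore, NLI, n-gram, prompting), and the function name here is illustrative, not the package's API.

```python
import re

def hallucination_score(sentence: str, samples: list[str]) -> float:
    """Toy SelfCheckGPT-style score in [0, 1]; higher = less supported by samples.

    Consistency is approximated as the fraction of the sentence's word
    tokens that also appear in each sampled re-generation, averaged over
    samples, then inverted so that unsupported sentences score high.
    """
    tokens = set(re.findall(r"\w+", sentence.lower()))
    if not tokens or not samples:
        return 1.0  # nothing to compare against: treat as unsupported
    overlaps = []
    for sample in samples:
        sample_tokens = set(re.findall(r"\w+", sample.lower()))
        overlaps.append(len(tokens & sample_tokens) / len(tokens))
    return 1.0 - sum(overlaps) / len(overlaps)

# Two stochastic "samples" of the same answer (hypothetical illustration data):
samples = [
    "Paris is the capital of France and lies on the Seine.",
    "The capital of France is Paris, on the river Seine.",
]
supported = hallucination_score("Paris is the capital of France", samples)
unsupported = hallucination_score("Paris was founded in 1802 by Napoleon", samples)
```

A claim echoed by the samples scores near 0, while a claim absent from them scores near 1; the real library replaces the overlap proxy with semantic scorers so paraphrases are not penalized.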
Alternatives and similar repositories for selfcheckgpt
Users interested in selfcheckgpt are comparing it to the libraries listed below.
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. (☆554, updated Feb 12, 2024)
- A package to evaluate the factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… (☆415, updated Apr 13, 2025)
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) (☆63, updated Dec 25, 2023)
- Token-level Reference-free Hallucination Detection (☆98, updated Jul 25, 2023)
- Reading list on hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large … (☆1,076, updated Sep 27, 2025)
- List of papers on hallucination detection in LLMs. (☆1,053, updated Jan 11, 2026)
- Official implementation of the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" (☆541, updated Jan 17, 2025)
- RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Langua… (☆415, updated May 16, 2025)
- (☆58, updated Jun 30, 2023)
- FacTool: Factuality Detection in Generative AI (☆913, updated Aug 19, 2024)
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" (☆227, updated Dec 2, 2024)
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning (☆30, updated Mar 5, 2024)
- Source code of the paper "GPTScore: Evaluate as You Desire" (☆258, updated Feb 21, 2023)
- Interpretable unified language safety checking with large language models (☆32, updated Apr 15, 2023)
- (☆22, updated Feb 3, 2024)
- TruthfulQA: Measuring How Models Imitate Human Falsehoods (☆886, updated Jan 16, 2025)
- (☆49, updated Jan 7, 2024)
- Code and data for the FACTOR paper (☆53, updated Nov 15, 2023)
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… (☆130, updated Jul 10, 2024)
- Source code of the MIND paper (ACL 2024 long paper) (☆61, updated Nov 14, 2025)
- Implementation of "Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation" (☆82, updated Jul 31, 2023)
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 (☆511, updated Oct 9, 2024)
- Benchmarking long-form factuality in large language models. Original code for the paper "Long-form factuality in large language models". (☆667, updated Feb 5, 2026)
- Codebase, data, and models for the SummaC paper in TACL (☆108, updated Jan 30, 2025)
- Do Large Language Models Know What They Don't Know? (☆102, updated Nov 8, 2024)
- (☆313, updated Jun 9, 2024)
- Codebase for reproducing the experiments of the semantic uncertainty paper (short-phrase and sentence-length experiments). (☆406, updated Apr 12, 2024)
- BERTScore for text generation (☆1,876, updated Jul 30, 2024)
- Awesome-LLM-Robustness: a curated list on Uncertainty, Reliability, and Robustness in Large Language Models (☆812, updated May 21, 2025)
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model (☆571, updated Jan 28, 2025)
- Repository for the paper "Shepherd: A Critic for Language Model Generation" (☆222, updated Aug 10, 2023)
- (☆13, updated Aug 26, 2024)
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" (☆1,818, updated Jun 17, 2025)
- (☆35, updated Mar 25, 2024)
- Resources for the paper "Evaluating the Factual Consistency of Abstractive Text Summarization" (☆309, updated May 1, 2025)
- (☆21, updated Aug 19, 2024)
- Fact-Checking the Output of Generative Large Language Models in both Annotation and Evaluation. (☆114, updated Jan 6, 2024)
- Faithfulness and factuality annotations of XSum summaries from our paper "On Faithfulness and Factuality in Abstractive Summarization" (h… (☆84, updated Nov 26, 2020)
- Repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity" (☆341, updated Apr 25, 2024)