RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models.
☆417 · updated May 16, 2025
Alternatives and similar repositories for RefChecker
Users interested in RefChecker are comparing it to the libraries listed below.
- RAGChecker: A Fine-grained Framework For Diagnosing RAG (☆1,065, updated Dec 13, 2024)
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models (☆605, updated Jun 26, 2024)
- AlignScore, a metric for factual consistency evaluation (ACL 2023) (☆155, updated Mar 11, 2024)
- List of papers on hallucination detection in LLMs (☆1,060, updated Jan 11, 2026)
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" (☆233, updated Dec 2, 2024)
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) (☆64, updated Dec 25, 2023)
- A package to evaluate factuality of long-form generation. Original implementation of the EMNLP 2023 paper "FActScore: Fine-grained Atomic…" (☆420, updated Apr 13, 2025)
- Fact-Checking the Output of Generative Large Language Models in both Annotation and Evaluation (☆114, updated Jan 6, 2024)
- Scalable Meta-Evaluation of LLMs as Evaluators (☆43, updated Feb 15, 2024)
- The repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models (☆567, updated Feb 12, 2024)
- A flipped classroom series on understanding LLMs for non-CS/AI students (☆39, updated Jun 5, 2025)
- The repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity" (☆341, updated Apr 25, 2024)
- ACL 2021 (☆27, updated May 24, 2022)
- Code for the paper "Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation" (☆37, updated Sep 1, 2025)
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" (☆136, updated Jun 5, 2024)
- Supercharge Your LLM Application Evaluations 🚀 (☆13,008, updated Feb 24, 2026)
- Benchmarking long-form factuality in large language models. Original code for the paper "Long-form factuality in large language models" (☆673, updated this week)
- Code and data for the FACTOR paper (☆53, updated Nov 15, 2023)
- FacTool: Factuality Detection in Generative AI (☆916, updated Aug 19, 2024)
- Metrics to evaluate the quality of responses of your Retrieval-Augmented Generation (RAG) applications (☆325, updated Jul 10, 2025)
- Thin wrapper for AllenNLP's implementation of supervised open information extraction (☆17, updated Nov 19, 2019)
- Automated Evaluation of RAG Systems (☆697, updated Mar 28, 2025)
- An Easy-to-use Hallucination Detection Framework for LLMs (☆63, updated Apr 21, 2024)
- FactCG: Enhancing Fact Checkers with Graph-Based Multi-Hop Data (NAACL 2025) (☆15, updated Jul 14, 2025)
- Code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" (☆17, updated Feb 26, 2024)
- Code for "FactKB: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge" (EMNLP 2023) (☆20, updated Dec 25, 2023)
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large …" (☆1,078, updated Sep 27, 2025)
- LLM hallucination paper list (☆331, updated Mar 11, 2024)
- The official code of our paper at EMNLP 2022: "Back to the Future: Bidirectional Information Decoupling Network for Multi-turn Dialogue Mo…" (☆16, updated Feb 17, 2023)
- Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency (☆39, updated Jan 18, 2025)
- We believe the ability of an LLM to attribute the text that it generates is likely to be crucial for both system developers and users in … (☆54, updated Jul 28, 2023)
- RARR: Researching and Revising What Language Models Say, Using Language Models (☆53, updated Jun 22, 2023)
- Resources for "Retrieval Augmentation for Commonsense Reasoning: A Unified Approach" (EMNLP 2022) (☆24, updated Nov 23, 2022)