RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models.
☆424 · May 16, 2025 · Updated 11 months ago
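RefChecker checks a response at the level of fine-grained claims extracted from it rather than judging the whole response at once. As a rough, library-independent sketch of that idea (the `check_claims` function and its substring heuristic are invented for this illustration; RefChecker itself uses LLM-based claim-triplet extractors and entailment checkers):

```python
# Toy illustration of claim-level hallucination checking (NOT RefChecker's
# actual API): break a model response into atomic claims, then label each
# claim against a reference text. A naive all-words-present heuristic
# stands in for a real entailment model here.

def check_claims(claims, reference):
    """Label each claim by whether the reference appears to support it."""
    labels = {}
    ref_lower = reference.lower()
    for claim in claims:
        # Naive stand-in for an entailment checker: a claim counts as
        # supported only if every one of its words occurs in the reference.
        words = [w.lower().strip(".,") for w in claim.split()]
        supported = all(w in ref_lower for w in words)
        labels[claim] = "Entailment" if supported else "Neutral"
    return labels

reference = "Marie Curie won the Nobel Prize in Physics in 1903."
claims = [
    "Marie Curie won the Nobel Prize in Physics",
    "Marie Curie was born in 1903",
]
print(check_claims(claims, reference))
```

Fine-grained labels like these let a pipeline localize which part of a response is unsupported instead of flagging the entire output.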
Alternatives and similar repositories for RefChecker
Users interested in RefChecker are comparing it to the libraries listed below.
- ☆23 · Feb 3, 2024 · Updated 2 years ago
- RAGChecker: A Fine-grained Framework For Diagnosing RAG (☆1,080 · Dec 13, 2024 · Updated last year)
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models (☆610 · Jun 26, 2024 · Updated last year)
- AlignScore, a metric for factual consistency evaluation (ACL 2023) (☆161 · Mar 11, 2024 · Updated 2 years ago)
- List of papers on hallucination detection in LLMs (☆1,079 · Apr 23, 2026 · Updated last week)
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" (☆241 · Dec 2, 2024 · Updated last year)
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) (☆64 · Dec 25, 2023 · Updated 2 years ago)
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic…" (☆434 · Apr 13, 2025 · Updated last year)
- Fact-Checking the Output of Generative Large Language Models in both Annotation and Evaluation (☆115 · Jan 6, 2024 · Updated 2 years ago)
- Scalable Meta-Evaluation of LLMs as Evaluators (☆43 · Feb 15, 2024 · Updated 2 years ago)
- The repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models (☆577 · Feb 12, 2024 · Updated 2 years ago)
- ☆224 · Apr 2, 2025 · Updated last year
- ☆32 · May 10, 2024 · Updated last year
- A flipped classroom series on understanding LLMs for non-CS/AI students (☆40 · Jun 5, 2025 · Updated 10 months ago)
- The repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity" (☆341 · Mar 28, 2026 · Updated last month)
- ☆76 · Feb 16, 2024 · Updated 2 years ago
- ACL 2021 (☆27 · May 24, 2022 · Updated 3 years ago)
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" (☆139 · Jun 5, 2024 · Updated last year)
- Supercharge Your LLM Application Evaluations 🚀 (☆13,709 · Feb 24, 2026 · Updated 2 months ago)
- Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models" (☆684 · Apr 18, 2026 · Updated last week)
- Code and data for the FACTOR paper (☆53 · Nov 15, 2023 · Updated 2 years ago)
- Automated Evaluation of RAG Systems (☆707 · Mar 28, 2025 · Updated last year)
- FacTool: Factuality Detection in Generative AI (☆925 · Aug 19, 2024 · Updated last year)
- Metrics to evaluate the quality of responses of your Retrieval Augmented Generation (RAG) applications (☆324 · Jul 10, 2025 · Updated 9 months ago)
- [IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection (☆90 · Apr 28, 2024 · Updated 2 years ago)
- Thin wrapper for AllenNLP's implementation of supervised open information extraction (☆17 · Nov 19, 2019 · Updated 6 years ago)
- ☆10 · Nov 29, 2024 · Updated last year
- An Easy-to-use Hallucination Detection Framework for LLMs (☆63 · Apr 21, 2024 · Updated 2 years ago)
- The code of "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" (☆17 · Feb 26, 2024 · Updated 2 years ago)
- Code for "FactKB: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge" (EMNLP 2023) (☆20 · Dec 25, 2023 · Updated 2 years ago)
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large …" (☆1,082 · Sep 27, 2025 · Updated 7 months ago)
- FactCG: Enhancing Fact Checkers with Graph-Based Multi-Hop Data (NAACL 2025) (☆17 · Jul 14, 2025 · Updated 9 months ago)
- LLM hallucination paper list (☆334 · Mar 11, 2024 · Updated 2 years ago)
- We believe the ability of an LLM to attribute the text that it generates is likely to be crucial for both system developers and users in … (☆54 · Jul 28, 2023 · Updated 2 years ago)
- RARR: Researching and Revising What Language Models Say, Using Language Models (☆53 · Jun 22, 2023 · Updated 2 years ago)
- Resources for Retrieval Augmentation for Commonsense Reasoning: A Unified Approach (EMNLP 2022) (☆24 · Nov 23, 2022 · Updated 3 years ago)
- ☆12 · Sep 23, 2024 · Updated last year
- ☆19 · Dec 8, 2022 · Updated 3 years ago
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 (☆513 · Oct 9, 2024 · Updated last year)