This is the repository for HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆567 · Updated Feb 12, 2024
Alternatives and similar repositories for HaluEval
Users interested in HaluEval are comparing it to the repositories listed below.
- ☆49 · Updated Jan 7, 2024
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large …" ☆1,077 · Updated Sep 27, 2025
- List of papers on hallucination detection in LLMs. ☆1,062 · Updated this week
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic…" ☆423 · Updated Apr 13, 2025
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆606 · Updated Jun 26, 2024
- TruthfulQA: Measuring How Models Imitate Human Falsehoods ☆894 · Updated Jan 16, 2025
- Code and data for the FACTOR paper ☆53 · Updated Nov 15, 2023
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆64 · Updated Dec 25, 2023
- Token-level Reference-free Hallucination Detection ☆97 · Updated Jul 25, 2023
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆573 · Updated Jan 28, 2025
- ☆89 · Updated Nov 11, 2022
- FacTool: Factuality Detection in Generative AI ☆918 · Updated Aug 19, 2024
- Implementation of "Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation" ☆82 · Updated Jul 31, 2023
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated Mar 30, 2024
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆544 · Updated Jan 17, 2025
- ☆43 · Updated Sep 3, 2024
- [ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc. ☆180 · Updated Jun 7, 2025
- Code for the paper "Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation" ☆37 · Updated Sep 1, 2025
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆235 · Updated Dec 2, 2024
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆137 · Updated Jun 5, 2024
- Resources for the paper "Evaluating the Factual Consistency of Abstractive Text Summarization" ☆308 · Updated May 1, 2025
- Do Large Language Models Know What They Don’t Know? ☆102 · Updated Nov 8, 2024
- ☆58 · Updated Jun 30, 2023
- Official code for our paper "Reasoning Models Hallucinate More: Factuality-Aware Reinforcement Learning for Large Reasoning Models" ☆22 · Updated Oct 31, 2025
- ☆284 · Updated Jan 6, 2025
- LLM hallucination paper list ☆332 · Updated Mar 11, 2024
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated Aug 15, 2024
- Paper List for In-context Learning 🌷 ☆873 · Updated Oct 8, 2024
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆512 · Updated Oct 9, 2024
- A collection of research papers on Self-Correcting Large Language Models with Automated Feedback ☆569 · Updated Oct 28, 2024
- A Bilingual Role Evaluation Benchmark for Large Language Models ☆43 · Updated Jan 9, 2024
- [ACL 2023] AlignScore, a metric for factual consistency evaluation ☆156 · Updated Mar 11, 2024
- [ICLR 2024] The open-source repo of THU-KEG's KoLA benchmark ☆54 · Updated Sep 28, 2023
- ☆16 · Updated Sep 27, 2023
- [ACL 2023] CodeIE: Large Code Generation Models are Better Few-Shot Information Extractors ☆40 · Updated Dec 14, 2025
- RefChecker provides an automatic checking pipeline and benchmark dataset for detecting fine-grained hallucinations generated by Large Langua… ☆417 · Updated May 16, 2025
- A trend starts from "Chain of Thought Prompting Elicits Reasoning in Large Language Models" ☆2,101 · Updated Oct 5, 2023
- Code for "FactKB: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge" (EMNLP 2023) ☆20 · Updated Dec 25, 2023
- Measuring Massive Multitask Language Understanding (ICLR 2021) ☆1,569 · Updated May 28, 2023