This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models.
☆578 · Updated Feb 12, 2024
Alternatives and similar repositories for HaluEval
Users that are interested in HaluEval are comparing it to the libraries listed below.
- ☆50 · Updated Jan 7, 2024
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large …" ☆1,082 · Updated Sep 27, 2025
- List of papers on hallucination detection in LLMs. ☆1,080 · Updated Apr 23, 2026
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic…" ☆434 · Updated Apr 13, 2025
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆610 · Updated Jun 26, 2024
- TruthfulQA: Measuring How Models Imitate Human Falsehoods ☆909 · Updated Jan 16, 2025
- Code and data for the FACTOR paper ☆53 · Updated Nov 15, 2023
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆64 · Updated Dec 25, 2023
- Token-level Reference-free Hallucination Detection ☆98 · Updated Jul 25, 2023
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆575 · Updated Jan 28, 2025
- ☆90 · Updated Nov 11, 2022
- FacTool: Factuality Detection in Generative AI ☆926 · Updated Aug 19, 2024
- Implementation of "Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation" ☆82 · Updated Jul 31, 2023
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated Mar 30, 2024
- ☆43 · Updated Sep 3, 2024
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆554 · Updated Jan 17, 2025
- [ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc. ☆180 · Updated Jun 7, 2025
- Code for the paper "Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation" ☆37 · Updated Apr 15, 2026
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆242 · Updated Dec 2, 2024
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆139 · Updated Jun 5, 2024
- Resources for the "Evaluating the Factual Consistency of Abstractive Text Summarization" paper ☆308 · Updated May 1, 2025
- Do Large Language Models Know What They Don't Know? ☆103 · Updated Nov 8, 2024
- ☆58 · Updated Jun 30, 2023
- Official code for our paper "Reasoning Models Hallucinate More: Factuality-Aware Reinforcement Learning for Large Reasoning Models" ☆24 · Updated Oct 31, 2025
- ☆284 · Updated Jan 6, 2025
- LLM hallucination paper list ☆334 · Updated Mar 11, 2024
- Paper List for In-context Learning 🌷 ☆874 · Updated Oct 8, 2024
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆514 · Updated Oct 9, 2024
- A collection of research papers on Self-Correcting Large Language Models with Automated Feedback ☆570 · Updated Oct 28, 2024
- A Bilingual Role Evaluation Benchmark for Large Language Models ☆43 · Updated Jan 9, 2024
- [ICLR24] The open-source repo of THU-KEG's KoLA benchmark ☆56 · Updated Sep 28, 2023
- [ACL 2023] AlignScore, a metric for factual consistency evaluation ☆162 · Updated Mar 11, 2024
- ☆16 · Updated Sep 27, 2023
- [ACL 23] CodeIE: Large Code Generation Models are Better Few-Shot Information Extractors ☆40 · Updated Dec 14, 2025
- RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Langua… ☆424 · Updated May 16, 2025
- A trend starting from "Chain of Thought Prompting Elicits Reasoning in Large Language Models" ☆2,104 · Updated Oct 5, 2023
- Code for "FactKB: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge" (EMNLP 2023) ☆20 · Updated Dec 25, 2023
- Measuring Massive Multitask Language Understanding (ICLR 2021) ☆1,580 · Updated May 28, 2023
- Collection of papers on scalable automated alignment ☆93 · Updated Oct 22, 2024