TruthfulQA: Measuring How Models Imitate Human Falsehoods
☆908 · Jan 16, 2025 · Updated last year
Alternatives and similar repositories for TruthfulQA
Users interested in TruthfulQA are comparing it to the repositories listed below.
- The repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆576 · Feb 12, 2024 · Updated 2 years ago
- ☆90 · Nov 11, 2022 · Updated 3 years ago
- Code and data for the FACTOR paper ☆53 · Nov 15, 2023 · Updated 2 years ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆139 · Jun 5, 2024 · Updated last year
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆573 · Jan 28, 2025 · Updated last year
- A package to evaluate factuality of long-form generation. Original implementation of the EMNLP 2023 paper "FActScore: Fine-grained Atomic…" ☆434 · Apr 13, 2025 · Updated last year
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,577 · May 28, 2023 · Updated 2 years ago
- Reading list of hallucination in LLMs, including the survey paper "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large …" ☆1,082 · Sep 27, 2025 · Updated 7 months ago
- Official implementation of the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆551 · Jan 17, 2025 · Updated last year
- A framework for few-shot evaluation of language models. ☆12,331 · Apr 22, 2026 · Updated last week
- ☆231 · Feb 23, 2021 · Updated 5 years ago
- ☆12 · Mar 7, 2024 · Updated 2 years ago
- Beyond the Imitation Game: a collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,233 · Jul 19, 2024 · Updated last year
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,840 · Jun 17, 2025 · Updated 10 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,976 · Aug 9, 2025 · Updated 8 months ago
- Code for the ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space" ☆144 · Mar 26, 2024 · Updated 2 years ago
- Token-level Reference-free Hallucination Detection ☆98 · Jul 25, 2023 · Updated 2 years ago
- Teaching Models to Express Their Uncertainty in Words ☆38 · May 26, 2022 · Updated 3 years ago
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆610 · Jun 26, 2024 · Updated last year
- Code for the paper "Evaluating Large Language Models Trained on Code" ☆3,212 · Jan 17, 2025 · Updated last year
- ☆283 · Mar 2, 2024 · Updated 2 years ago
- ☆772 · Jun 13, 2024 · Updated last year
- Representation Engineering: A Top-Down Approach to AI Transparency ☆986 · Aug 14, 2024 · Updated last year
- ☆1,423 · Jan 21, 2024 · Updated 2 years ago
- [ICML 2024] TrustLLM: Trustworthiness in Large Language Models ☆622 · Jun 24, 2025 · Updated 10 months ago
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆555 · Jun 25, 2024 · Updated last year
- Train transformer language models with reinforcement learning. ☆18,193 · Updated this week
- Code for generating the ToxiGen dataset, published at ACL 2022. ☆345 · Jun 17, 2024 · Updated last year
- A prize for finding tasks that cause large language models to show inverse scaling ☆619 · Oct 11, 2023 · Updated 2 years ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆35 · Aug 15, 2024 · Updated last year
- Holistic Evaluation of Language Models (HELM), an open-source Python framework created by the Center for Research on Foundation Models … ☆2,765 · Updated this week
- Do Large Language Models Know What They Don’t Know? ☆103 · Nov 8, 2024 · Updated last year
- List of papers on hallucination detection in LLMs. ☆1,079 · Apr 23, 2026 · Updated last week
- The MATH Dataset (NeurIPS 2021) ☆1,346 · Sep 6, 2025 · Updated 7 months ago
- ☆21 · Aug 19, 2024 · Updated last year
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,770 · Aug 4, 2024 · Updated last year
- Locating and editing factual associations in GPT (NeurIPS 2022) ☆748 · Apr 20, 2024 · Updated 2 years ago
- Code and data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Feb 27, 2024 · Updated 2 years ago
- AllenAI's post-training codebase ☆3,702 · Updated this week