TruthfulQA: Measuring How Models Imitate Human Falsehoods
☆899 · Jan 16, 2025 · Updated last year
Alternatives and similar repositories for TruthfulQA
Users interested in TruthfulQA are comparing it to the repositories listed below.
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆568 · Feb 12, 2024 · Updated 2 years ago
- ☆89 · Nov 11, 2022 · Updated 3 years ago
- Code and data for the FACTOR paper ☆53 · Nov 15, 2023 · Updated 2 years ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆138 · Jun 5, 2024 · Updated last year
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆574 · Jan 28, 2025 · Updated last year
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic…" ☆425 · Apr 13, 2025 · Updated 11 months ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,569 · May 28, 2023 · Updated 2 years ago
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large …" ☆1,079 · Sep 27, 2025 · Updated 6 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆546 · Jan 17, 2025 · Updated last year
- A framework for few-shot evaluation of language models (see the usage sketch after this list). ☆12,020 · Apr 1, 2026 · Updated last week
- ☆230 · Feb 23, 2021 · Updated 5 years ago
- ☆12 · Mar 7, 2024 · Updated 2 years ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,834 · Jun 17, 2025 · Updated 9 months ago
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,222 · Jul 19, 2024 · Updated last year
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,964 · Aug 9, 2025 · Updated 8 months ago
- Code for the ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space" ☆143 · Mar 26, 2024 · Updated 2 years ago
- Token-level Reference-free Hallucination Detection ☆97 · Jul 25, 2023 · Updated 2 years ago
- Teaching Models to Express Their Uncertainty in Words ☆38 · May 26, 2022 · Updated 3 years ago
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆609 · Jun 26, 2024 · Updated last year
- Code for the paper "Evaluating Large Language Models Trained on Code" ☆3,188 · Jan 17, 2025 · Updated last year
- ☆282 · Mar 2, 2024 · Updated 2 years ago
- ☆771 · Jun 13, 2024 · Updated last year
- Representation Engineering: A Top-Down Approach to AI Transparency ☆973 · Aug 14, 2024 · Updated last year
- ☆1,414 · Jan 21, 2024 · Updated 2 years ago
- [ICML 2024] TrustLLM: Trustworthiness in Large Language Models ☆621 · Jun 24, 2025 · Updated 9 months ago
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆553 · Jun 25, 2024 · Updated last year
- Train transformer language models with reinforcement learning. ☆17,967 · Updated this week
- This repo contains the code for generating the ToxiGen dataset, published at ACL 2022. ☆344 · Jun 17, 2024 · Updated last year
- A prize for finding tasks that cause large language models to show inverse scaling ☆618 · Oct 11, 2023 · Updated 2 years ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆35 · Aug 15, 2024 · Updated last year
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models… ☆2,735 · Updated this week
- List of papers on hallucination detection in LLMs. ☆1,067 · Updated this week
- Do Large Language Models Know What They Don’t Know? ☆103 · Nov 8, 2024 · Updated last year
- The MATH Dataset (NeurIPS 2021) ☆1,340 · Sep 6, 2025 · Updated 7 months ago
- ☆21 · Aug 19, 2024 · Updated last year
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,770 · Aug 4, 2024 · Updated last year
- Locating and editing factual associations in GPT (NeurIPS 2022) ☆738 · Apr 20, 2024 · Updated last year
- AllenAI's post-training codebase ☆3,677 · Updated this week
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Feb 27, 2024 · Updated 2 years ago
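Several of the entries above are general evaluation frameworks rather than single benchmarks. As a minimal sketch of how the TruthfulQA data itself is commonly consumed, assuming the dataset is published on the Hugging Face Hub under the ID `truthful_qa` with a `multiple_choice` configuration (assumptions, not taken from this page):

```python
# Minimal sketch (assumed Hub ID "truthful_qa", config "multiple_choice"):
# inspect one TruthfulQA item, where mc1_targets pairs answer choices with
# 0/1 labels and exactly one choice is marked truthful.
from datasets import load_dataset

ds = load_dataset("truthful_qa", "multiple_choice")["validation"]
sample = ds[0]
print(sample["question"])
print(sample["mc1_targets"]["choices"])  # candidate answers
print(sample["mc1_targets"]["labels"])   # 1 = truthful, 0 = common falsehood
```

For end-to-end scoring, the few-shot evaluation framework listed above ships TruthfulQA as built-in tasks; assuming it is EleutherAI's lm-evaluation-harness with its current task names, a command like `lm_eval --model hf --model_args pretrained=<model> --tasks truthfulqa_mc1,truthfulqa_mc2` would report MC1/MC2 accuracy.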