likenneth / honest_llama
Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
☆570 · Updated Jan 28, 2025
Alternatives and similar repositories for honest_llama
Users interested in honest_llama are comparing it to the libraries listed below.
- ☆247 · Updated Feb 22, 2024
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" · ☆537 · Updated Jan 17, 2025
- Stanford NLP Python library for understanding and improving PyTorch models via interventions · ☆858 · Updated Jan 29, 2026
- Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space" · ☆144 · Updated Mar 26, 2024
- Algebraic value editing in pretrained language models · ☆68 · Updated Nov 1, 2023
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. · ☆554 · Updated Feb 12, 2024
- ☆284 · Updated Mar 2, 2024
- Representation Engineering: A Top-Down Approach to AI Transparency · ☆946 · Updated Aug 14, 2024
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" · ☆69 · Updated Feb 27, 2024
- Steering Llama 2 with Contrastive Activation Addition · ☆209 · Updated May 23, 2024
- ☆58 · Updated Jun 30, 2023
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large … · ☆1,076 · Updated Sep 27, 2025
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces · ☆100 · Updated Sep 21, 2023
- TruthfulQA: Measuring How Models Imitate Human Falsehoods · ☆884 · Updated Jan 16, 2025
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) · ☆62 · Updated Mar 30, 2024
- ☆99 · Updated Aug 8, 2024
- Locating and editing factual associations in GPT (NeurIPS 2022) · ☆727 · Updated Apr 20, 2024
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" · ☆75 · Updated May 20, 2025
- contrastive decoding · ☆207 · Updated Nov 14, 2022
- Code for paper: Aligning Large Language Models with Representation Editing: A Control Perspective · ☆35 · Updated Jan 31, 2025
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… · ☆415 · Updated Apr 13, 2025
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. · ☆85 · Updated Mar 7, 2025
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) · ☆540 · Updated Jan 31, 2024
- Tools for understanding how transformer predictions are built layer-by-layer · ☆567 · Updated Aug 7, 2025
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering · ☆198 · Updated Feb 13, 2025
- Using sparse coding to find distributed representations used by neural networks. · ☆296 · Updated Nov 10, 2023
- ☆89 · Updated Nov 11, 2022
- List of papers on hallucination detection in LLMs. · ☆1,046 · Updated Jan 11, 2026
- Measuring and Controlling Persona Drift in Language Model Dialogs · ☆21 · Updated Feb 26, 2024
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability. · ☆18 · Updated Jan 14, 2025
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding · ☆151 · Updated Jul 19, 2024
- Improving Alignment and Robustness with Circuit Breakers · ☆258 · Updated Sep 24, 2024
- Scripts for generating synthetic finetuning data for reducing sycophancy. · ☆121 · Updated Aug 16, 2023
- Stanford NLP Python library for Representation Finetuning (ReFT) · ☆1,555 · Updated Jan 14, 2026
- RewardBench: the first evaluation tool for reward models. · ☆687 · Updated Jan 31, 2026
- A framework for few-shot evaluation of language models. · ☆11,393 · Updated Feb 11, 2026
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] · ☆588 · Updated Dec 9, 2024
- [NIPS 2023] RRHF & Wombat · ☆809 · Updated Sep 22, 2023
- A library for mechanistic interpretability of GPT-style language models · ☆3,073 · Updated Feb 11, 2026