Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
☆573 · Jan 28, 2025 · Updated last year
Alternatives and similar repositories for honest_llama
Users interested in honest_llama are comparing it to the repositories listed below.
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆544 · Jan 17, 2025 · Updated last year
- ☆250 · Feb 22, 2024 · Updated 2 years ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆866 · Updated this week
- Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space" ☆143 · Mar 26, 2024 · Updated last year
- Algebraic value editing in pretrained language models ☆69 · Nov 1, 2023 · Updated 2 years ago
- Repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models ☆556 · Feb 12, 2024 · Updated 2 years ago
- ☆284 · Mar 2, 2024 · Updated 2 years ago
- Representation Engineering: A Top-Down Approach to AI Transparency ☆957 · Aug 14, 2024 · Updated last year
- Code & Data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Feb 27, 2024 · Updated 2 years ago
- Steering Llama 2 with Contrastive Activation Addition ☆213 · May 23, 2024 · Updated last year
- ☆58 · Jun 30, 2023 · Updated 2 years ago
- Reading list on hallucination in LLMs, including the survey paper "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large …" ☆1,076 · Sep 27, 2025 · Updated 5 months ago
- LLM experiments done during SERI MATS, focusing on activation steering and interpreting activation spaces ☆102 · Sep 21, 2023 · Updated 2 years ago
- TruthfulQA: Measuring How Models Imitate Human Falsehoods ☆890 · Jan 16, 2025 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Mar 30, 2024 · Updated last year
- ☆102 · Aug 8, 2024 · Updated last year
- Locating and editing factual associations in GPT (NeurIPS 2022) ☆734 · Apr 20, 2024 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · May 20, 2025 · Updated 9 months ago
- Contrastive decoding ☆207 · Nov 14, 2022 · Updated 3 years ago
- Code for the paper "Aligning Large Language Models with Representation Editing: A Control Perspective" ☆35 · Jan 31, 2025 · Updated last year
- A package to evaluate the factuality of long-form generation; original implementation of the EMNLP 2023 paper "FActScore: Fine-grained Atomic…" ☆419 · Apr 13, 2025 · Updated 10 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆85 · Mar 7, 2025 · Updated last year
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆543 · Jan 31, 2024 · Updated 2 years ago
- Code for "In-context Vectors: Making In-Context Learning More Effective and Controllable Through Latent Space Steering" ☆198 · Feb 13, 2025 · Updated last year
- Tools for understanding how transformer predictions are built layer by layer ☆570 · Aug 7, 2025 · Updated 7 months ago
- Using sparse coding to find distributed representations used by neural networks ☆297 · Nov 10, 2023 · Updated 2 years ago
- List of papers on hallucination detection in LLMs ☆1,055 · Jan 11, 2026 · Updated last month
- ☆89 · Nov 11, 2022 · Updated 3 years ago
- MoGU: a framework for improving LLMs' safety while preserving their usability ☆18 · Jan 14, 2025 · Updated last year
- Measuring and Controlling Persona Drift in Language Model Dialogs ☆22 · Feb 26, 2024 · Updated 2 years ago
- Official repository for ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆151 · Jul 19, 2024 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers ☆258 · Sep 24, 2024 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy ☆121 · Aug 16, 2023 · Updated 2 years ago
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,562 · Updated this week
- RewardBench: the first evaluation tool for reward models ☆702 · Feb 16, 2026 · Updated 3 weeks ago
- Deita: Data-Efficient Instruction Tuning for Alignment (ICLR 2024) ☆589 · Dec 9, 2024 · Updated last year
- A framework for few-shot evaluation of language models ☆11,618 · Updated this week
- RRHF & Wombat (NeurIPS 2023) ☆809 · Sep 22, 2023 · Updated 2 years ago
- Code and data for the FACTOR paper ☆53 · Nov 15, 2023 · Updated 2 years ago