Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
☆573 · Jan 28, 2025 · Updated last year
Alternatives and similar repositories for honest_llama
Users interested in honest_llama are comparing it to the repositories listed below.
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" · ☆544 · Jan 17, 2025 · Updated last year
- ☆251 · Feb 22, 2024 · Updated 2 years ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions · ☆868 · Mar 6, 2026 · Updated 3 weeks ago
- Code for the ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space" · ☆143 · Mar 26, 2024 · Updated 2 years ago
- Algebraic value editing in pretrained language models · ☆69 · Nov 1, 2023 · Updated 2 years ago
- ☆282 · Mar 2, 2024 · Updated 2 years ago
- Repository of HaluEval, a large-scale hallucination evaluation benchmark for large language models · ☆567 · Feb 12, 2024 · Updated 2 years ago
- Representation Engineering: A Top-Down Approach to AI Transparency · ☆969 · Aug 14, 2024 · Updated last year
- Code for the paper "Aligning Large Language Models with Representation Editing: A Control Perspective" · ☆35 · Jan 31, 2025 · Updated last year
- ☆58 · Jun 30, 2023 · Updated 2 years ago
- Steering Llama 2 with Contrastive Activation Addition · ☆220 · May 23, 2024 · Updated last year
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" · ☆69 · Feb 27, 2024 · Updated 2 years ago
- TruthfulQA: Measuring How Models Imitate Human Falsehoods · ☆896 · Jan 16, 2025 · Updated last year
- Reading list of hallucination in LLMs, including the survey paper "Siren's Song in the AI Ocean: A Survey on Hallucination in Large …" · ☆1,077 · Sep 27, 2025 · Updated 6 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) · ☆62 · Mar 30, 2024 · Updated 2 years ago
- LLM experiments done during SERI MATS, focusing on activation steering and interpreting activation spaces · ☆103 · Sep 21, 2023 · Updated 2 years ago
- Tools for understanding how transformer predictions are built layer-by-layer · ☆578 · Aug 7, 2025 · Updated 7 months ago
- A package to evaluate the factuality of long-form generation; original implementation of the EMNLP 2023 paper "FActScore: Fine-grained Atomic…" · ☆425 · Apr 13, 2025 · Updated 11 months ago
- Locating and editing factual associations in GPT (NeurIPS 2022) · ☆737 · Apr 20, 2024 · Updated last year
- Contrastive decoding · ☆206 · Nov 14, 2022 · Updated 3 years ago
- Using sparse coding to find distributed representations used by neural networks · ☆298 · Nov 10, 2023 · Updated 2 years ago
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization · ☆44 · Jul 28, 2024 · Updated last year
- ☆104 · Aug 8, 2024 · Updated last year
- ☆89 · Nov 11, 2022 · Updated 3 years ago
- A novel MoGU framework that improves LLMs' safety while preserving their usability · ☆18 · Jan 14, 2025 · Updated last year
- Code for "Reducing Hallucinations in Vision-Language Models via Latent Space Steering" · ☆107 · Nov 23, 2024 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers · ☆258 · Sep 24, 2024 · Updated last year
- Code for "In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering" · ☆198 · Feb 13, 2025 · Updated last year
- A resource repository for representation engineering in large language models · ☆149 · Nov 14, 2024 · Updated last year
- ☆278 · Oct 1, 2024 · Updated last year
- ☆60 · Nov 18, 2024 · Updated last year
- List of papers on hallucination detection in LLMs · ☆1,062 · Mar 22, 2026 · Updated last week
- LoFiT: Localized Fine-tuning on LLM Representations · ☆44 · Jan 15, 2025 · Updated last year
- A library for mechanistic interpretability of GPT-style language models · ☆3,239 · Updated this week
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" · ☆152 · Jul 19, 2024 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity · ☆85 · Mar 7, 2025 · Updated last year
- Steering vectors for transformer language models in PyTorch / Hugging Face · ☆141 · Feb 21, 2025 · Updated last year
- Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) · ☆49 · Jan 15, 2026 · Updated 2 months ago
- Code and data for the FACTOR paper · ☆53 · Nov 15, 2023 · Updated 2 years ago
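Several of the repositories above (honest_llama, Contrastive Activation Addition, the steering-vectors library) share one core mechanic: adding a fixed direction to a layer's activations at inference time. A minimal PyTorch sketch of that mechanic, using a toy two-layer model and an arbitrary steering direction; the model, vector, and scale here are hypothetical illustrations, not any repository's actual API:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a transformer layer stack (hypothetical, for illustration).
model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 8))

steering_vector = torch.full((8,), 0.5)  # hypothetical "truthful" direction
alpha = 2.0                              # intervention strength

def add_steering(module, inputs, output):
    # Returning a value from a forward hook replaces the layer's output,
    # shifting all downstream computation along the steering direction.
    return output + alpha * steering_vector

handle = model[0].register_forward_hook(add_steering)
x = torch.zeros(1, 8)
with torch.no_grad():
    steered = model(x)
handle.remove()
with torch.no_grad():
    baseline = model(x)
```

In the real methods, the direction is not arbitrary: ITI fits probes on attention-head activations to find truth-correlated directions, while CAA averages activation differences between contrastive prompt pairs; the hook-and-add step itself is essentially the same.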