dannyallover / overthinking_the_truth
☆29 · Updated last year
Alternatives and similar repositories for overthinking_the_truth
Users who are interested in overthinking_the_truth are comparing it to the repositories listed below.
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆58 · Updated 6 months ago
- Augmenting Statistical Models with Natural Language Parameters ☆26 · Updated 8 months ago
- ☆41 · Updated 8 months ago
- ☆40 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆39 · Updated 4 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- ☆74 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆33 · Updated 9 months ago
- ☆36 · Updated 2 months ago
- ☆27 · Updated 2 years ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆34 · Updated 6 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆28 · Updated 4 months ago
- Analyzing LLM alignment via token distribution shift ☆16 · Updated last year
- Code for the ACL 2023 paper "BOLT: Fast Energy-based Controlled Text Generation with Tunable Biases" ☆21 · Updated last year
- ☆41 · Updated last year
- Repo for the paper "Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge" ☆13 · Updated last year
- Official code for the ICML 2024 paper on Persona In-Context Learning (PICLe) ☆24 · Updated 11 months ago
- ☆24 · Updated 8 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆72 · Updated 2 months ago
- ☆17 · Updated last year
- Provides the answer to "How to do patching on all available SAEs on GPT-2?"; the official repository of the implementation of the p… ☆11 · Updated 4 months ago
- ☆10 · Updated 3 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆25 · Updated last month
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated last week
- Code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆16 · Updated last year
- Methods and evaluation for aligning language models temporally ☆29 · Updated last year
- ☆49 · Updated last year