dannyallover / overthinking_the_truth
☆29 · Updated 11 months ago
Alternatives and similar repositories for overthinking_the_truth:
Users interested in overthinking_the_truth are comparing it to the repositories listed below.
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆53 · Updated 5 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆37 · Updated 3 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- ☆25 · Updated 2 years ago
- Augmenting Statistical Models with Natural Language Parameters ☆26 · Updated 7 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆67 · Updated 2 years ago
- Official code for ICML 2024 paper on Persona In-Context Learning (PICLe) ☆23 · Updated 9 months ago
- ☆40 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆58 · Updated last year
- Repo for paper: Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge ☆13 · Updated last year
- Methods and evaluation for aligning language models temporally ☆29 · Updated last year
- Code for EMNLP 2024 paper: Neuron-Level Knowledge Attribution in Large Language Models ☆30 · Updated 5 months ago
- ☆37 · Updated last year
- AbstainQA, ACL 2024 ☆25 · Updated 6 months ago
- Active Example Selection for In-Context Learning (EMNLP'22) ☆49 · Updated 9 months ago
- ☆73 · Updated 11 months ago
- ☆49 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆109 · Updated 7 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆31 · Updated 8 months ago
- Providing the answer to "How to do patching on all available SAEs on GPT-2?". It is an official repository of the implementation of the p… ☆11 · Updated 3 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆72 · Updated last month
- ☆17 · Updated last year
- ☆27 · Updated last month
- ☆35 · Updated 6 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆57 · Updated last year
- ☆22 · Updated 6 months ago
- General-purpose activation steering library ☆61 · Updated 3 months ago
- Analyzing LLM Alignment via Token Distribution Shift ☆16 · Updated last year
- ☆41 · Updated last year
- ☆10 · Updated 2 months ago