dannyallover / overthinking_the_truth
☆29 · Updated last year
Alternatives and similar repositories for overthinking_the_truth
Users interested in overthinking_the_truth are comparing it to the repositories listed below.
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆64 · Updated 10 months ago
- ☆97 · Updated last year
- Augmenting Statistical Models with Natural Language Parameters ☆28 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆70 · Updated 2 years ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆80 · Updated 7 months ago
- ☆45 · Updated last year
- ☆44 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆116 · Updated last year
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- Analyzing LLM Alignment via Token Distribution Shift ☆16 · Updated last year
- Providing the answer to "How to do patching on all available SAEs on GPT-2?". It is an official repository of the implementation of the p… ☆12 · Updated 8 months ago
- ☆51 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆41 · Updated 8 months ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆43 · Updated 10 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- ☆52 · Updated 6 months ago
- ☆27 · Updated 2 years ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆81 · Updated 9 months ago
- ☆41 · Updated last year
- [NeurIPS 2023 D&B Track] Code and data for the paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evalua… ☆35 · Updated 2 years ago
- ☆56 · Updated 2 years ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- Methods and evaluation for aligning language models temporally ☆30 · Updated last year
- ☆75 · Updated last year
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆120 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- ☆25 · Updated 3 months ago
- Official code for the ICML 2024 paper on Persona In-Context Learning (PICLe) ☆26 · Updated last year