zepingyu0512 / neuron-attribution
Code for the EMNLP 2024 paper: Neuron-Level Knowledge Attribution in Large Language Models
☆35 · Updated 7 months ago
Alternatives and similar repositories for neuron-attribution
Users interested in neuron-attribution are comparing it to the repositories listed below
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆60 · Updated 7 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- Awesome SAE papers ☆35 · Updated last month
- LoFiT: Localized Fine-tuning on LLM Representations ☆39 · Updated 5 months ago
- ☆44 · Updated 3 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆69 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆112 · Updated 9 months ago
- [NeurIPS 2024] RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models ☆76 · Updated 8 months ago
- ☆44 · Updated 7 months ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆36 · Updated 10 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆112 · Updated last year
- ☆29 · Updated 6 months ago
- ☆29 · Updated last year
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆89 · Updated 4 months ago
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization ☆26 · Updated 10 months ago
- ☆59 · Updated 11 months ago
- Official code for the ICML 2024 paper on Persona In-Context Learning (PICLe) ☆25 · Updated last year
- AbstainQA, ACL 2024 ☆26 · Updated 8 months ago
- ☆24 · Updated 2 years ago
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆65 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- ☆44 · Updated last year
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆28 · Updated 2 months ago
- ☆11 · Updated 4 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆72 · Updated 3 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- ☆22 · Updated 3 months ago
- ☆41 · Updated 8 months ago
- ☆19 · Updated 4 months ago