zepingyu0512 / awesome-SAE
awesome SAE papers
☆26 · Updated 2 months ago
Alternatives and similar repositories for awesome-SAE:
Users interested in awesome-SAE are comparing it to the repositories listed below.
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" (☆30, updated 5 months ago)
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… (☆75, updated 2 months ago)
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization (☆22, updated 8 months ago)
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" (☆89, updated 9 months ago)
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications (☆76, updated 3 weeks ago)
- A curated list of resources for activation engineering (☆63, updated 2 weeks ago)
- LoFiT: Localized Fine-tuning on LLM Representations (☆35, updated 3 months ago)
- FeatureAlignment = Alignment + Mechanistic Interpretability (☆28, updated last month)
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering (☆53, updated 4 months ago)
- A resource repository for representation engineering in large language models (☆117, updated 5 months ago)
- Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) (☆42, updated 5 months ago)
- [ACL 2024 Findings] "Understanding and Patching Compositional Reasoning in LLMs" (☆12, updated 7 months ago)
- Implementation code for the ACL 2024 paper "Advancing Parameter Efficiency in Fine-tuning via Representation Editing" (☆13, updated last year)
- A curated list of LLM interpretability material: tutorials, libraries, surveys, papers, blogs, etc. (☆223, updated last month)
- [EMNLP 2024] Official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" (☆112, updated 7 months ago)
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions (☆108, updated 7 months ago)
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity (☆71, updated last month)
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety (☆80, updated 11 months ago)
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) (☆57, updated last year)
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) (☆63, updated 6 months ago)