zepingyu0512 / awesome-SAE
A curated list of awesome sparse autoencoder (SAE) papers
☆57 · Updated 5 months ago
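Since every entry below orbits sparse autoencoders, here is a minimal SAE sketch for orientation. It is illustrative only: the dimensions, the L1 coefficient, and details such as decoder-weight normalization vary across the papers collected in this list.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: encode a d_model-dim activation into an overcomplete
    ReLU feature vector, then reconstruct the activation from it."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x: torch.Tensor):
        feats = torch.relu(self.encoder(x))   # sparse feature activations
        recon = self.decoder(feats)           # reconstruction of x
        return recon, feats

# Training objective: reconstruction error plus an L1 penalty that
# pushes most features to zero for any given activation.
sae = SparseAutoencoder(d_model=768, d_features=768 * 8)
x = torch.randn(64, 768)          # stand-in for real LLM activations
recon, feats = sae(x)
l1_coeff = 1e-3                   # sparsity strength (hyperparameter)
loss = (recon - x).pow(2).mean() + l1_coeff * feats.abs().mean()
loss.backward()
```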
Alternatives and similar repositories for awesome-SAE
Users interested in awesome-SAE are comparing it to the repositories listed below.
- ☆32 · Updated 5 months ago
- A curated list of resources for activation engineering ☆108 · Updated last month
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆127 · Updated 3 months ago (see the logit-lens sketch after this list)
- A curated list of LLM interpretability-related material: tutorials, libraries, surveys, papers, blogs, etc. ☆284 · Updated 7 months ago
- Code for the EMNLP 2024 paper: Neuron-Level Knowledge Attribution in Large Language Models ☆47 · Updated 11 months ago
- A resource repository for representation engineering in large language models ☆140 · Updated last year
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆83 · Updated 10 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆66 · Updated 11 months ago
- ☆38 · Updated 11 months ago
- ☆180 · Updated last year
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆86 · Updated last year
- ☆17 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆43 · Updated 10 months ago
- ☆63 · Updated 8 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆31 · Updated 8 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆145 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆86 · Updated 7 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆163 · Updated 6 months ago
- This paper list focuses on the theoretical and empirical analysis of language models, especially large language models (LLMs). The papers… ☆93 · Updated 11 months ago
- LLM Unlearning ☆177 · Updated 2 years ago
- Code for the paper "Aligning Large Language Models with Representation Editing: A Control Perspective" ☆34 · Updated 9 months ago
- [ACL 2024 Findings] "Understanding and Patching Compositional Reasoning in LLMs" ☆12 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆121 · Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆59 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- Implementation code for ACL 2024: "Advancing Parameter Efficiency in Fine-tuning via Representation Editing" ☆14 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆117 · Updated last year
- Implementation of the paper "ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability" ☆49 · Updated 5 months ago
- [ICML 2025] "From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?" ☆46 · Updated last month
- Code for "Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities" (NeurIPS'24) ☆32 · Updated 10 months ago
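The Logit Lens toolkit listed above refers to a standard interpretability technique: decode each layer's residual stream through the model's final layer norm and unembedding matrix to see the model's "current guess" at every depth. A minimal sketch, assuming a Hugging Face causal LM (GPT-2 here for size; the listed toolkit targets Llama-3.1-8B and Qwen-2.5-7B, and its actual API may differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; not the toolkit's supported models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states: (num_layers + 1) tensors of shape [batch, seq, d_model]
ln_f = model.transformer.ln_f   # final layer norm (GPT-2 attribute names)
unembed = model.lm_head         # maps d_model -> vocab logits
for layer, h in enumerate(out.hidden_states):
    logits = unembed(ln_f(h[0, -1]))   # decode the last token position
    print(f"layer {layer:2d}: {tok.decode(logits.argmax().item())!r}")
```

Typically the early layers decode to generic tokens and the expected continuation (" Paris") only emerges in the later layers, which is the depth-wise picture the logit lens is meant to expose.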