zepingyu0512 / awesome-SAE
A curated list of awesome SAE (sparse autoencoder) papers
☆18 · Updated last month
Alternatives and similar repositories for awesome-SAE:
Users interested in awesome-SAE are comparing it to the repositories listed below
- The official repo of the paper "Self-Control of LLM Behaviors by Compressing Suffix Gradient into Prefix Controller" · ☆18 · Updated 6 months ago
- ☆153 · Updated 7 months ago
- A resource repository for representation engineering in large language models · ☆101 · Updated 3 months ago
- LLM Unlearning · ☆140 · Updated last year
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety · ☆77 · Updated 9 months ago
- ☆14 · Updated 11 months ago
- ☆24 · Updated last year
- ☆12 · Updated 5 months ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" · ☆28 · Updated 3 months ago
- ☆21 · Updated last year
- [NeurIPS 2023] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors · ☆71 · Updated last month
- Official code for the ICML 2024 paper on Persona In-Context Learning (PICLe) · ☆23 · Updated 7 months ago
- ☆11 · Updated 2 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions · ☆106 · Updated 5 months ago
- [AAAI 2024] History Matters: Temporal Knowledge Editing in Large Language Model · ☆12 · Updated last year
- Code for the ACL 2022 paper "Knowledge Neurons in Pretrained Transformers" · ☆163 · Updated 9 months ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models · ☆45 · Updated 5 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" · ☆103 · Updated 4 months ago
- LoFiT: Localized Fine-tuning on LLM Representations · ☆32 · Updated last month
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" · ☆84 · Updated 5 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" · ☆60 · Updated last year
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" · ☆31 · Updated last month
- A survey on harmful fine-tuning attacks for large language models · ☆135 · Updated this week
- ☆15 · Updated 8 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" · ☆71 · Updated 7 months ago
- Code for "Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities" (NeurIPS 2024) · ☆16 · Updated 2 months ago