zepingyu0512 / awesome-LLM-neuron
☆14 · Updated 3 months ago
Alternatives and similar repositories for awesome-LLM-neuron:
Users interested in awesome-LLM-neuron are comparing it to the repositories listed below.
- awesome SAE papers ☆26 · Updated 2 months ago
- A resource repository for representation engineering in large language models ☆119 · Updated 5 months ago
- ☆34 · Updated last month
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆72 · Updated last month
- A curated list of LLM interpretability material: tutorials, libraries, surveys, papers, blogs, etc. ☆223 · Updated last month
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆75 · Updated 2 months ago
- This is the repository for our paper: Untying the Reversal Curse via Bidirectional Language Model Editing ☆10 · Updated last year
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization ☆22 · Updated 8 months ago
- Code for the EMNLP 2024 paper: Neuron-Level Knowledge Attribution in Large Language Models ☆30 · Updated 5 months ago
- ☆41 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆76 · Updated 3 weeks ago
- ☆38 · Updated last year
- Code for the ACL 2022 paper "Knowledge Neurons in Pretrained Transformers" ☆168 · Updated 11 months ago
- ☆21 · Updated 6 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆28 · Updated last month
- Official repository for the paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆89 · Updated 9 months ago
- ☆21 · Updated 4 months ago
- ☆46 · Updated 10 months ago
- For the OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research ☆109 · Updated 2 weeks ago
- ☆57 · Updated 9 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆109 · Updated 7 months ago
- LLM experiments done during SERI MATS, focusing on activation steering and interpreting activation spaces ☆91 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆144 · Updated 11 months ago
- [ACL 2024 Findings] "Understanding and Patching Compositional Reasoning in LLMs" ☆12 · Updated 7 months ago
- [NAACL 2025 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆53 · Updated 5 months ago
- ☆48 · Updated last year
- ☆164 · Updated 10 months ago
- Using sparse coding to find distributed representations used by neural networks ☆236 · Updated last year
- ☆131 · Updated last year
- [NAACL 2025 Demo] TrustEval: A modular and extensible toolkit for comprehensive trust evaluation of generative foundation models (GenFMs) ☆97 · Updated 2 weeks ago