zepingyu0512 / awesome-LLM-neuron
☆36 · Updated 7 months ago
Alternatives and similar repositories for awesome-LLM-neuron
Users interested in awesome-LLM-neuron are comparing it to the repositories listed below.
- awesome SAE papers ☆71 · Updated 8 months ago
- A curated list of resources for activation engineering ☆121 · Updated 3 months ago
- A resource repository for representation engineering in large language models ☆148 · Updated last year
- A curated list of LLM interpretability-related material: tutorials, libraries, surveys, papers, blogs, etc. ☆291 · Updated last week
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆153 · Updated 5 months ago
- LLM Unlearning ☆181 · Updated 2 years ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆170 · Updated 9 months ago
- ☆55 · Updated last year
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆49 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆89 · Updated 10 months ago
- awesome papers in LLM interpretability ☆607 · Updated 5 months ago
- ☆17 · Updated last year
- ☆63 · Updated 8 months ago
- ☆41 · Updated last year
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization ☆41 · Updated last year
- Awesome-Parallel-Reasoning: Unlocking the reasoning potential of LLMs. Papers, Code, Resources & Survey. ☆45 · Updated 3 weeks ago
- [ACL 2024 Findings] "Understanding and Patching Compositional Reasoning in LLMs" ☆13 · Updated last year
- ☆32 · Updated 10 months ago
- A survey on harmful fine-tuning attacks for large language models ☆231 · Updated 3 weeks ago
- ☆72 · Updated last year
- Code for the ACL 2022 paper "Knowledge Neurons in Pretrained Transformers" ☆173 · Updated last year
- This paper list focuses on the theoretical and empirical analysis of language models, especially large language models (LLMs). The papers… ☆98 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆151 · Updated last year
- This is the official code for the paper "Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable" ☆28 · Updated 10 months ago
- [ICML 2025] "From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?" ☆49 · Updated 3 months ago
- This repo covers LLM safety, including attacks, defenses, and studies related to reasoning and RL ☆59 · Updated 4 months ago
- Code for the paper "Aligning Large Language Models with Representation Editing: A Control Perspective" ☆34 · Updated last year
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆34 · Updated 10 months ago
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS2024)