zepingyu0512 / awesome-LLM-neuron
☆26 · Updated 2 months ago
Alternatives and similar repositories for awesome-LLM-neuron
Users interested in awesome-LLM-neuron are comparing it to the repositories listed below.
- awesome SAE papers ☆43 · Updated 3 months ago
- A resource repository for representation engineering in large language models ☆131 · Updated 9 months ago
- A curated list of resources for activation engineering ☆101 · Updated 3 months ago
- A curated list of LLM interpretability-related material: tutorials, libraries, surveys, papers, blogs, etc. ☆265 · Updated 5 months ago
- ☆42 · Updated 3 months ago
- LLM Unlearning ☆174 · Updated last year
- ☆51 · Updated last year
- awesome papers in LLM interpretability ☆536 · Updated last week
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆82 · Updated 5 months ago
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆99 · Updated 2 weeks ago
- ☆17 · Updated last year
- ☆32 · Updated 8 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆146 · Updated 4 months ago
- [ACL 2024 Findings] "Understanding and Patching Compositional Reasoning in LLMs" ☆12 · Updated last year
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization ☆31 · Updated last year
- A survey on harmful fine-tuning attacks for large language models ☆205 · Updated this week
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆94 · Updated 3 months ago
- ☆157 · Updated 11 months ago
- ☆28 · Updated last year
- ☆48 · Updated last year
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆41 · Updated 9 months ago
- ☆21 · Updated 5 months ago
- ☆59 · Updated last year
- LLM hallucination paper list ☆322 · Updated last year
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS2024)☆45Updated 9 months ago
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety ☆87 · Updated last year
- ☆38 · Updated last year
- A resource repository for machine unlearning in large language models ☆473 · Updated last month
- This is the official code for the paper "Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable".☆25Updated 5 months ago
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation"☆76Updated 8 months ago