IBM / activation-steering
General-purpose activation steering library
☆49 · Updated 2 months ago
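For context, the contrastive activation-addition idea behind several of the repos below (e.g. CAA, algebraic value editing) can be sketched in a few lines: take the difference of mean activations on contrastive prompt sets and add it, scaled, to the hidden states. This is a minimal toy sketch with NumPy and made-up data; the function names are illustrative assumptions, not the API of IBM's activation-steering library.

```python
import numpy as np

def steering_vector(pos_acts, neg_acts):
    """Contrastive steering vector: difference of mean activations
    over 'positive' vs. 'negative' prompt sets."""
    return pos_acts.mean(axis=0) - neg_acts.mean(axis=0)

def apply_steering(hidden, vec, alpha=1.0):
    """Add a scaled steering vector to every token's hidden state."""
    return hidden + alpha * vec

# Toy data standing in for layer activations (5 prompts, hidden size 8).
rng = np.random.default_rng(0)
d = 8
pos = rng.normal(loc=1.0, size=(5, d))    # activations on positive prompts
neg = rng.normal(loc=-1.0, size=(5, d))   # activations on negative prompts

vec = steering_vector(pos, neg)
hidden = np.zeros((3, d))                 # hidden states for 3 tokens
steered = apply_steering(hidden, vec, alpha=0.5)
```

In a real model the addition would typically happen inside a forward hook at a chosen layer; the scale `alpha` and the layer are the main knobs the libraries below expose in one form or another.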
Alternatives and similar repositories for activation-steering:
Users interested in activation-steering are comparing it with the libraries listed below.
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆69 · Updated this week
- Steering Llama 2 with Contrastive Activation Addition ☆125 · Updated 9 months ago
- A resource repository for representation engineering in large language models ☆109 · Updated 3 months ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆91 · Updated last year
- Algebraic value editing in pretrained language models ☆63 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆85 · Updated 2 weeks ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆50 · Updated 3 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆142 · Updated 5 months ago
- Repo accompanying the paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆70 · Updated last year
- ☆89 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆71 · Updated 2 weeks ago
- [NeurIPS 2024] How do Large Language Models Handle Multilingualism? ☆28 · Updated 4 months ago
- ☆30 · Updated 10 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆34 · Updated last month
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆29 · Updated 3 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆81 · Updated 8 months ago
- Weak-to-Strong Jailbreaking on Large Language Models ☆72 · Updated last year
- ☆37 · Updated last year
- ☆38 · Updated last year
- ☆80 · Updated 7 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆63 · Updated 5 months ago
- For the OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research ☆95 · Updated this week
- Implementation of PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆33 · Updated 4 months ago
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆106 · Updated 11 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆89 · Updated 9 months ago
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆21 · Updated 2 months ago
- ICLR 2024 paper demonstrating properties of safety tuning and exaggerated safety ☆77 · Updated 10 months ago
- ☆20 · Updated 7 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆73 · Updated 2 months ago
- ☆53 · Updated 2 years ago