nrimsky / CAA
Steering Llama 2 with Contrastive Activation Addition
☆162 · Updated last year
Alternatives and similar repositories for CAA
Users interested in CAA are comparing it to the libraries listed below.
- LLM experiments done during SERI MATS - focusing on activation steering / interpreting activation spaces ☆95 · Updated last year
- General-purpose activation steering library ☆83 · Updated 2 months ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆112 · Updated 4 months ago
- ☆140 · Updated 7 months ago
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆135 · Updated this week
- Function Vectors in Large Language Models (ICLR 2024) ☆170 · Updated 2 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆238 · Updated last month
- Using sparse coding to find distributed representations used by neural networks. ☆259 · Updated last year
- ☆182 · Updated 3 months ago
- ☆216 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆70 · Updated 2 months ago
- ☆231 · Updated 9 months ago
- Algebraic value editing in pretrained language models ☆65 · Updated last year
- Sparse probing paper full code. ☆58 · Updated last year
- ☆87 · Updated 11 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆192 · Updated last week
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆74 · Updated 4 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆60 · Updated 7 months ago
- A resource repository for representation engineering in large language models ☆127 · Updated 8 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆218 · Updated 9 months ago
- ☆95 · Updated last year
- ☆121 · Updated 11 months ago
- AI Logging for Interpretability and Explainability🔬 ☆124 · Updated last year
- ☆105 · Updated last month
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆101 · Updated 4 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆56 · Updated 8 months ago
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization ☆27 · Updated 11 months ago
- ☆170 · Updated 7 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆99 · Updated 2 weeks ago
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering ☆180 · Updated 5 months ago