nrimsky / CAA
Steering Llama 2 with Contrastive Activation Addition
☆192 · Updated last year
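For readers comparing these libraries, here is a minimal sketch of the contrastive-activation-addition idea behind CAA: compute a steering vector as the difference of residual-stream activations on a contrastive prompt pair, then add it to the residual stream during generation. This is an illustrative sketch, not the repo's implementation; the model name, layer index, prompt pair, and scale below are assumptions (the CAA paper sweeps layers and multipliers and averages over many pairs).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumptions: any HF causal LM works in principle; Llama-2-7b-chat matches the
# repo's focus but is illustrative. LAYER and SCALE are hypothetical choices.
MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"
LAYER, SCALE = 13, 2.0

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

def last_token_residual(prompt: str) -> torch.Tensor:
    """Residual-stream activation at the final token, after decoder block LAYER."""
    ids = tok(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so block LAYER's output
    # sits at index LAYER + 1.
    return out.hidden_states[LAYER + 1][0, -1, :]

# One contrastive pair for illustration (a toy sycophancy probe); the method
# averages this difference over many pairs that isolate a single behavior.
pos = "Q: I think 2 + 2 = 5. Am I right? A: Yes"
neg = "Q: I think 2 + 2 = 5. Am I right? A: No"
steering_vector = last_token_residual(pos) - last_token_residual(neg)

def steering_hook(module, inputs, output):
    """Add the steering vector to the block's output on every forward pass.

    Adding at all token positions is a simplification of the paper's scheme.
    """
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + SCALE * steering_vector.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.model.layers[LAYER].register_forward_hook(steering_hook)
try:
    ids = tok("Q: I think the Earth is flat. Am I right? A:",
              return_tensors="pt").to(model.device)
    print(tok.decode(model.generate(**ids, max_new_tokens=40)[0],
                     skip_special_tokens=True))
finally:
    handle.remove()  # restore the unsteered model
```

A negative `SCALE` subtracts the vector instead, which is how these methods typically suppress a behavior rather than amplify it.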
Alternatives and similar repositories for CAA
Users interested in CAA are comparing it to the libraries listed below.
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆98 · Updated 2 years ago
- [ICLR 2025] General-purpose activation steering library ☆116 · Updated last month
- ☆184 · Updated 11 months ago
- ☆192 · Updated 3 weeks ago
- ☆237 · Updated last year
- Performant framework for training, analyzing, and visualizing Sparse Autoencoders (SAEs) and their frontier variants. ☆163 · Updated this week
- Using sparse coding to find distributed representations used by neural networks. ☆283 · Updated 2 years ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆128 · Updated 8 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆183 · Updated 6 months ago
- ☆92 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆140 · Updated 4 months ago
- ☆133 · Updated 3 weeks ago
- Full code for the sparse probing paper. ☆65 · Updated last year
- Algebraic value editing in pretrained language models ☆66 · Updated 2 years ago
- ☆58 · Updated 3 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆222 · Updated this week
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆296 · Updated 5 months ago
- A resource repository for representation engineering in large language models ☆140 · Updated 11 months ago
- A library for efficient patching and automatic circuit discovery. ☆79 · Updated 3 months ago
- ☆101 · Updated 2 years ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆66 · Updated 11 months ago
- ☆129 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆83 · Updated 8 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆116 · Updated 8 months ago
- ☆252 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆56 · Updated 2 weeks ago
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization ☆34 · Updated last year
- Open-source replication of Anthropic's Crosscoders for Model Diffing ☆59 · Updated last year
- AI Logging for Interpretability and Explainability 🔬 ☆133 · Updated last year
- ☆56 · Updated 11 months ago