annahdo / implementing_activation_steering
A collection of different ways to access and modify the internal activations of LLMs
☆11 · Updated 4 months ago
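The repository's focus, as described above, is reading and writing internal activations at inference time. A minimal sketch of one common approach, steering with a PyTorch forward hook on a Hugging Face model, follows; the model, layer index, steering vector, and scale are illustrative assumptions, not this repo's actual code:

```python
# Minimal activation-steering sketch using a PyTorch forward hook.
# The model, layer index, steering vector, and scale below are
# illustrative placeholders, not taken from the repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

layer = 6                                                # hypothetical layer choice
steering_vector = torch.randn(model.config.hidden_size)  # placeholder direction
alpha = 4.0                                              # steering strength (assumption)

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # add the scaled steering vector at every token position.
    hidden = output[0] + alpha * steering_vector.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer].register_forward_hook(add_steering)
try:
    ids = tokenizer("The weather today is", return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=20,
                         pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook so later generations run unsteered
```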
Alternatives and similar repositories for implementing_activation_steering:
Users interested in implementing_activation_steering are comparing it to the libraries listed below.
- Experiments with representation engineering ☆11 · Updated 11 months ago
- ☆29 · Updated 9 months ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆88 · Updated last year
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆39 · Updated 3 months ago
- ☆52 · Updated this week
- Steering Llama 2 with Contrastive Activation Addition (see the sketch after this list) ☆123 · Updated 8 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆88 · Updated this week
- Improving Steering Vectors by Targeting Sparse Autoencoder Features ☆15 · Updated 3 months ago
- A library for efficient patching and automatic circuit discovery ☆53 · Updated this week
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆64 · Updated 8 months ago
- ☆116 · Updated last year
- ☆205 · Updated 4 months ago
- ☆55 · Updated 3 months ago
- ☆151 · Updated this week
- Algebraic value editing in pretrained language models ☆62 · Updated last year
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆71 · Updated last year
- ☆31 · Updated last week
- Measuring the situational awareness of language models ☆34 · Updated last year
- Sparse Autoencoder Training Library ☆41 · Updated 3 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations ☆195 · Updated last week
- Code and data repo for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State" ☆18 · Updated last year
- Utilities for the HuggingFace transformers library ☆64 · Updated 2 years ago
- Mechanistic Interpretability Visualizations using React ☆232 · Updated 2 months ago
- Erasing concepts from neural representations with provable guarantees ☆222 · Updated 3 weeks ago
- ☆10 · Updated 7 months ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆29 · Updated 8 months ago
- Redwood Research's transformer interpretability tools ☆14 · Updated 2 years ago
- Mechanistic Interpretability for Transformer Models ☆49 · Updated 2 years ago
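The Contrastive Activation Addition entry above builds its steering direction from activation differences on paired prompts. A minimal sketch of that extraction step, assuming GPT-2, a hypothetical layer, illustrative prompt pairs, and last-token pooling (the linked repo's actual recipe may differ):

```python
# Sketch of extracting a contrastive steering vector in the spirit of
# Contrastive Activation Addition (CAA). Prompt pairs, layer choice, and
# last-token pooling are illustrative assumptions, not the repo's method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
layer = 6  # hypothetical layer

pairs = [  # (positive, negative) contrastive prompts, made up for illustration
    ("I love helping people.", "I refuse to help anyone."),
    ("That was a kind thing to do.", "That was a cruel thing to do."),
]

def last_token_activation(text: str) -> torch.Tensor:
    ids = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[layer + 1] is the output of transformer block `layer`
    # (index 0 holds the embeddings); take the final token's activation.
    return out.hidden_states[layer + 1][0, -1]

diffs = [last_token_activation(pos) - last_token_activation(neg)
         for pos, neg in pairs]
steering_vector = torch.stack(diffs).mean(dim=0)  # averaged contrast direction
```

The resulting vector can then be added back at the same layer during generation, as in the forward-hook sketch near the top of this page.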