EleutherAI / concept-erasure
Erasing concepts from neural representations with provable guarantees
☆223 · Updated last month
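For context, concept-erasure implements LEACE (least-squares concept erasure). Below is a minimal usage sketch, assuming the `LeaceEraser` interface advertised in the repo's README; exact names may differ across versions.

```python
# Minimal sketch: fit a LEACE eraser on (representation, label) pairs,
# then apply it to remove the concept from the representations.
# Assumes the LeaceEraser API from the repo's README; names may vary by version.
import torch
from sklearn.datasets import make_classification
from concept_erasure import LeaceEraser

# Toy stand-in for model activations X and a binary concept label Y.
X, Y = make_classification(n_samples=2048, n_features=128, n_classes=2, random_state=0)
X_t = torch.from_numpy(X).float()
Y_t = torch.from_numpy(Y)

eraser = LeaceEraser.fit(X_t, Y_t)   # estimate the least-squares eraser
X_erased = eraser(X_t)               # erase the concept from the activations
```

After erasure, LEACE guarantees that no linear classifier can predict Y from `X_erased` better than a constant predictor, which is the "provable guarantee" in the tagline.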
Alternatives and similar repositories for concept-erasure:
Users interested in concept-erasure are comparing it to the libraries listed below.
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆197 · Updated this week
- Extract full next-token probabilities via language model APIs ☆230 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆235 · Updated 2 months ago
- ☆120 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆183 · Updated 2 months ago
- ☆207 · Updated 5 months ago
- ☆153 · Updated this week
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆88 · Updated last week
- ☆262 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆477 · Updated 9 months ago
- ☆84 · Updated 2 weeks ago
- ☆122 · Updated 2 weeks ago
- ☆57 · Updated 3 months ago
- Utilities for the HuggingFace transformers library ☆64 · Updated 2 years ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆195 · Updated last year
- git extension for {collaborative, communal, continual} model development ☆208 · Updated 3 months ago
- Steering Llama 2 with Contrastive Activation Addition ☆124 · Updated 9 months ago
- Sparsify transformers with SAEs and transcoders ☆476 · Updated this week
- ☆110 · Updated 6 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆186 · Updated 9 months ago
- Mechanistic Interpretability for Transformer Models ☆49 · Updated 2 years ago
- A library for efficient patching and automatic circuit discovery. ☆54 · Updated 2 weeks ago
- ☆78 · Updated 8 months ago
- ☆255 · Updated 8 months ago
- ☆190 · Updated last year
- ☆58 · Updated this week
- Sparse Autoencoder for Mechanistic Interpretability ☆217 · Updated 7 months ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆176 · Updated last year
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆114 · Updated 2 years ago
- Using sparse coding to find distributed representations used by neural networks. ☆217 · Updated last year