noanabeshima / matryoshka-saes
☆11, updated 2 months ago
Alternatives and similar repositories for matryoshka-saes:
Users interested in matryoshka-saes are comparing it to the libraries listed below.
- Sparse Autoencoder Training Library (☆39, updated 3 months ago)
- A library for efficient patching and automatic circuit discovery (☆48, updated 2 months ago)
- Implementation of the BatchTopK activation function for training sparse autoencoders (SAEs) (☆19, updated 4 months ago); see the sketch after this list
- PyTorch and NNsight implementation of AtP* (Kramár et al., 2024, DeepMind) (☆18, updated last week)
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… (☆26, updated 8 months ago)
- Code to reproduce key results accompanying "SAEs (usually) Transfer Between Base and Chat Models" (☆10, updated 6 months ago)
- Code for reproducing the paper "Not All Language Model Features Are Linear" (☆66, updated 2 months ago)
- A TinyStories LM with SAEs and transcoders (☆10, updated 3 weeks ago)
- Minimum Description Length probing for neural network representations (☆18, updated this week)
- Universal Neurons in GPT2 Language Models (☆27, updated 8 months ago)
- A library for mechanistic anomaly detection (☆17, updated 3 weeks ago)
- Open-source replication of Anthropic's Crosscoders for Model Diffing (☆31, updated 3 months ago)
- Code for the NeurIPS 2024 ATTRIB paper "Attribution Patching Outperforms Automated Circuit Discovery" (☆28, updated 7 months ago)
- Simple and scalable tools for data-driven pretraining data selection (☆14, updated this week)
- Sparse and discrete interpretability tool for neural networks (☆59, updated 11 months ago)
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs (☆36, updated 3 months ago)
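For context on the BatchTopK entry above, here is a minimal PyTorch sketch of the idea, not the linked repo's API: where a standard TopK SAE keeps the k largest latent activations in each sample, BatchTopK keeps the batch_size × k largest activations across the whole batch, so k becomes an average per sample rather than a hard per-sample budget. The function name `batch_topk` and the tensor shapes are illustrative assumptions.

```python
import torch

def batch_topk(acts: torch.Tensor, k: int) -> torch.Tensor:
    """Illustrative BatchTopK sketch (assumed API, not the linked repo's).

    acts: (batch, n_latents) pre-activation SAE latents.
    Keeps the batch * k largest activations across the whole batch,
    zeroing the rest, so k is only an average sparsity per sample.
    """
    batch_size = acts.shape[0]
    flat = acts.flatten()
    total_k = min(batch_size * k, flat.numel())
    # Threshold at the smallest value among the top batch*k activations
    # (ties at the threshold may keep slightly more than batch*k entries).
    threshold = flat.topk(total_k).values.min()
    return acts * (acts >= threshold)
```

A practical caveat: because the threshold depends on the whole batch, implementations typically swap in a fixed threshold estimated during training for inference, so that one sample's latents do not depend on its batchmates.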