rmovva / HypotheSAEs
HypotheSAEs: hypothesizing interpretable relationships in text datasets using sparse autoencoders. https://arxiv.org/abs/2502.04382
☆70 · Updated 3 months ago
Alternatives and similar repositories for HypotheSAEs
Users interested in HypotheSAEs are comparing it to the libraries listed below.
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆241 · Updated last week
- ☆64 · Updated last month
- ☆112 · Updated 11 months ago
- PAIR.withgoogle.com and friends' work on interpretability methods ☆220 · Updated this week
- A lightweight library for Bayesian analysis of LLM evals (ICML 2025 Spotlight Position Paper) ☆21 · Updated 8 months ago
- Forecasting with LLMs ☆55 · Updated last year
- Discovering Data-driven Hypotheses in the Wild ☆128 · Updated 7 months ago
- ☆88 · Updated last month
- ☆143 · Updated last month
- Attribution-based Parameter Decomposition ☆33 · Updated 7 months ago
- A toolkit for describing model features and intervening on those features to steer behavior. ☆227 · Updated last month
- ☆70 · Updated 3 weeks ago
- Sparse Autoencoder for Mechanistic Interpretability (a minimal SAE sketch follows this list) ☆290 · Updated last year
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆63 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆57 · Updated 3 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face (see the steering sketch after this list) ☆140 · Updated 11 months ago
- Unified access to Large Language Model modules using NNsight ☆87 · Updated last week
- The PRISM Alignment Project ☆88 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆238 · Updated last year
- Implementation of the BatchTopK activation function for training sparse autoencoders (SAEs); see the BatchTopK sketch after this list ☆60 · Updated 6 months ago
- ☆117 · Updated last year
- Course Materials for Interpretability of Large Language Models (0368.4264) at Tel Aviv University ☆297 · Updated 3 weeks ago
- We develop benchmarks and analysis tools to evaluate the causal reasoning abilities of LLMs. ☆137 · Updated last year
- ☆10 · Updated last year
- ☆36 · Updated 2 years ago
- Open source interpretability artefacts for R1. ☆170 · Updated 9 months ago
- ☆267 · Updated last year
- Data and code for the Corr2Cause paper (ICLR 2024) ☆114 · Updated last year
- ☆132 · Updated 2 years ago
- ☆83 · Updated 11 months ago
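
Several entries above center on sparse autoencoders. As a point of reference for the "Sparse Autoencoder for Mechanistic Interpretability" entry, here is a minimal sketch of a vanilla ReLU SAE with an L1 sparsity penalty; the dimensions, the `l1_coeff` value, and the loss weighting are illustrative assumptions, not any listed repo's actual implementation.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Vanilla ReLU sparse autoencoder over model activations (sketch).

    d_model and n_features are placeholder sizes; production SAEs add
    tricks (normalized decoder columns, dead-feature resampling) omitted here.
    """
    def __init__(self, d_model: int = 768, n_features: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        x_hat = self.decoder(f)          # reconstruction of the input
        return x_hat, f

def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that induces sparsity in f.
    recon = (x - x_hat).pow(2).sum(-1).mean()
    sparsity = f.abs().sum(-1).mean()
    return recon + l1_coeff * sparsity
```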
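
The steering-vector entry intervenes at inference time by adding a fixed direction to a layer's hidden states. Below is a hedged sketch using a plain PyTorch forward hook on Hugging Face GPT-2; the model, layer index, and random stand-in vector are assumptions (libraries like the one listed typically derive the vector from contrastive prompt pairs rather than sampling it).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model
tokenizer = AutoTokenizer.from_pretrained("gpt2")

layer_idx = 6                                         # assumed layer to steer
steering_vector = 0.1 * torch.randn(model.config.hidden_size)  # stand-in direction

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple; element 0 holds the hidden states.
    hidden = output[0] + steering_vector.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)
ids = tokenizer("The movie was", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20)
print(tokenizer.decode(out[0]))
handle.remove()  # restore the unsteered model
```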
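
BatchTopK swaps the per-sample TopK activation used in TopK SAEs for a batch-level one: keep the k × batch_size largest feature activations across the whole flattened batch and zero the rest, so sparsity averages k per sample while individual samples may use more or fewer features. A sketch under those assumptions:

```python
import torch

def batch_topk(acts: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k * batch_size largest activations across the whole batch.

    acts: (batch_size, n_features) SAE feature activations.
    Unlike per-sample TopK, sparsity is enforced only on average per sample.
    """
    batch_size = acts.shape[0]
    flat = acts.flatten()
    top = torch.topk(flat, k * batch_size)  # globally largest activations
    mask = torch.zeros_like(flat)
    mask[top.indices] = 1.0                 # keep only the global top k*B
    return (flat * mask).view_as(acts)

# Example: an average of 2 active features per sample across a batch of 4.
feats = torch.relu(torch.randn(4, 16))
sparse = batch_topk(feats, k=2)
```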