ckkissane / sae-transfer
Code to reproduce key results accompanying "SAEs (usually) Transfer Between Base and Chat Models"
☆12 · Updated last year
Alternatives and similar repositories for sae-transfer
Users interested in sae-transfer are comparing it to the repositories listed below.
- Sparse Autoencoder Training Library ☆53 · Updated 2 months ago
- ☆23 · Updated 5 months ago
- A library for efficient patching and automatic circuit discovery. ☆70 · Updated 2 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆49 · Updated 9 months ago
- Algebraic value editing in pretrained language models ☆65 · Updated last year
- PyTorch and NNsight implementation of AtP* (Kramár et al., 2024, DeepMind) ☆18 · Updated 6 months ago
- ☆15 · Updated last year
- Unified access to Large Language Model modules using NNsight ☆21 · Updated this week
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆77 · Updated 7 months ago
- Open-source replication of Anthropic's Crosscoders for Model Diffing ☆57 · Updated 8 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆31 · Updated 5 months ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated 2 months ago
- ☆46 · Updated last year
- Answering "How to do patching on all available SAEs on GPT-2?". Official repository implementing the p… ☆11 · Updated 5 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆56 · Updated last year
- ☆10 · Updated last month
- ☆87 · Updated 11 months ago
- Code for Adaptive Data Optimization ☆25 · Updated 7 months ago
- ☆23 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆102 · Updated 3 weeks ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆11 · Updated 3 months ago
- https://footprints.baulab.info ☆17 · Updated 9 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆33 · Updated 2 months ago
- ☆50 · Updated last year
- ☆19 · Updated 7 months ago
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆59 · Updated last week
- ☆27 · Updated 5 months ago