wesg52 / universal-neurons
Universal Neurons in GPT2 Language Models
☆29 · Updated last year
Alternatives and similar repositories for universal-neurons
Users interested in universal-neurons are comparing it to the libraries listed below.
- Sparse Autoencoder Training Library ☆50 · Updated last month
- ☆23 · Updated 4 months ago
- ☆52 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆65 · Updated last month
- A small package implementing some useful wrapping around nnsight ☆13 · Updated last month
- Attribution-based Parameter Decomposition ☆23 · Updated this week
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆89 · Updated last week
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆55 · Updated 7 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆73 · Updated 7 months ago
- ☆25 · Updated 3 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆25 · Updated last year
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆26 · Updated last year
- ☆83 · Updated 9 months ago
- ☆45 · Updated last year
- ☆21 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆75 · Updated 6 months ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆34 · Updated last year
- Official implementation of the transformer (TF) architecture suggested in a paper entitled "Looped Transformers as Programmable Computers… ☆27 · Updated 2 years ago
- ☆31 · Updated last year
- ☆14 · Updated last year
- ☆12 · Updated 3 months ago
- ☆32 · Updated 4 months ago
- ☆26 · Updated last year
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State ☆18 · Updated last year
- ☆19 · Updated 10 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆68 · Updated 4 months ago
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- ☆28 · Updated last year
- ☆16 · Updated 6 months ago
- Applying SAEs for fine-grained control ☆18 · Updated 5 months ago