wesg52 / universal-neurons
Universal Neurons in GPT2 Language Models
☆30 · Updated last year
Alternatives and similar repositories for universal-neurons
Users interested in universal-neurons are comparing it to the libraries listed below.
- Sparse Autoencoder Training Library · ☆54 · Updated 3 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" · ☆77 · Updated 8 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing · ☆58 · Updated 9 months ago
- ☆23 · Updated 6 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆81 · Updated 9 months ago
- Language models scale reliably with over-training and on downstream tasks · ☆98 · Updated last year
- ☆53 · Updated last year
- ☆103 · Updated 6 months ago
- Attribution-based Parameter Decomposition · ☆28 · Updated 2 months ago
- ☆28 · Updated 6 months ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" · ☆13 · Updated 4 months ago
- A library for efficient patching and automatic circuit discovery. · ☆76 · Updated last month
- Sparse and discrete interpretability tool for neural networks · ☆63 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods · ☆124 · Updated 2 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … · ☆206 · Updated last week
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… · ☆18 · Updated 9 months ago
- ☆34 · Updated 7 months ago
- nanoGPT-like codebase for LLM training · ☆102 · Updated 3 months ago
- Code repo for the model organisms and convergent directions of EM papers. · ☆21 · Updated last month
- ☆85 · Updated last year
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… · ☆28 · Updated last year
- ☆90 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. · ☆152 · Updated last month
- ☆23 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs · ☆91 · Updated 9 months ago
- ☆122 · Updated last year
- Applying SAEs for fine-grained control · ☆23 · Updated 8 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper · ☆129 · Updated 2 years ago
- Multi-Layer Sparse Autoencoders (ICLR 2025) · ☆24 · Updated 6 months ago
- ☆20 · Updated last year