wattenberg / superposition
Code associated with papers on superposition (in ML interpretability)
☆35 · Updated 3 years ago
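For readers new to the topic: the setup these papers study is a model asked to reconstruct more sparse features than it has hidden dimensions, which forces features to share directions ("superposition"). Below is a minimal NumPy sketch of that toy setup; it is not code from this repository, and the formulation follows the ReLU-output toy model popularized by Anthropic's "Toy Models of Superposition".

```python
# Minimal sketch (assumption: Anthropic-style toy model, not this repo's code).
# n sparse features are compressed into m < n dimensions by a linear map W,
# then reconstructed with a ReLU: x_hat = ReLU(W^T W x + b).
import numpy as np

rng = np.random.default_rng(0)
n_features, n_dims = 5, 2                     # more features than dimensions

W = rng.normal(size=(n_dims, n_features))     # shared encoder/decoder weights
b = np.zeros(n_features)                      # output bias

# A sparse input: each feature is active with probability 0.2.
x = (rng.random(n_features) < 0.2) * rng.random(n_features)

h = W @ x                                     # compressed representation
x_hat = np.maximum(0.0, W.T @ h + b)          # ReLU reconstruction

print("input:         ", x)
print("reconstruction:", x_hat)
```

Training W and b to minimize the reconstruction error over many such sparse inputs is what produces the superposition geometry (e.g. antipodal pairs, polygons of feature directions) analyzed in these papers.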
Alternatives and similar repositories for superposition
Users interested in superposition are comparing it to the libraries listed below.
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆181 · Updated 6 months ago
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆132 · Updated 3 years ago
- ☆73 · Updated 3 years ago
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆81 · Updated 3 years ago
- nanoGPT-like codebase for LLM training ☆114 · Updated 2 months ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆136 · Updated last year
- Understanding how features learned by neural networks evolve throughout training ☆41 · Updated last year
- Sparse Autoencoder Training Library ☆56 · Updated 8 months ago
- Universal Neurons in GPT2 Language Models ☆31 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- ☆62 · Updated last year
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… ☆40 · Updated 2 years ago
- ☆23 · Updated 11 months ago
- ☆167 · Updated 2 years ago
- gzip Predicts Data-dependent Scaling Laws ☆34 · Updated last year
- Sparse and discrete interpretability tool for neural networks ☆65 · Updated last year
- ☆132 · Updated 2 years ago
- ☆45 · Updated 2 years ago
- ☆54 · Updated last year
- ☆92 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆92 · Updated last year
- ☆53 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- JAX implementation of the Mistral 7b v0.2 model ☆35 · Updated last year
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- The Energy Transformer block, in JAX ☆63 · Updated 2 years ago
- ☆31 · Updated 9 months ago