wattenberg / superposition
Code associated with papers on superposition (in ML interpretability)
☆28 · Updated 2 years ago
Alternatives and similar repositories for superposition
Users interested in superposition are comparing it to the repositories listed below.
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 7 months ago
- Code Release for the "Broken Neural Scaling Laws" (BNSL) paper ☆58 · Updated last year
- Redwood Research's transformer interpretability tools ☆15 · Updated 3 years ago
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆59 · Updated 3 years ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆82 · Updated last year
- Universal Neurons in GPT2 Language Models ☆29 · Updated last year
- Understanding how features learned by neural networks evolve throughout training ☆35 · Updated 7 months ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- Proof-of-concept of global switching between numpy/jax/pytorch in a library ☆18 · Updated last year
- Jax/Flax rewrite of Karpathy's nanoGPT ☆57 · Updated 2 years ago
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆78 · Updated 2 years ago
- Learning Universal Predictors ☆76 · Updated 10 months ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆82 · Updated last year
- Experiments on the impact of depth in transformers and SSMs ☆30 · Updated 7 months ago
- Code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" ☆37 · Updated 2 years ago
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- gzip Predicts Data-dependent Scaling Laws ☆35 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆134 · Updated this week
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- Sparse Autoencoder Training Library ☆52 · Updated last month
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆54 · Updated last year