wattenberg / superposition
Code associated with papers on superposition (in ML interpretability)
☆28 · Updated 2 years ago
Alternatives and similar repositories for superposition
Users interested in superposition are comparing it to the libraries listed below.
- ☆26 · Updated 2 years ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆81 · Updated last year
- ☆45 · Updated last year
- ☆53 · Updated last year
- The Energy Transformer block, in JAX ☆56 · Updated last year
- Minimal but scalable implementation of large language models in JAX ☆34 · Updated 7 months ago
- ☆27 · Updated last year
- ☆37 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- Proof-of-concept of global switching between numpy/jax/pytorch in a library. ☆18 · Updated 11 months ago
- ☆29 · Updated 2 months ago
- Meta-learning inductive biases in the form of useful conserved quantities. ☆37 · Updated 2 years ago
- Experiment of using Tangent to autodiff triton ☆78 · Updated last year
- ☆78 · Updated 10 months ago
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆78 · Updated 2 years ago
- ☆28 · Updated 6 months ago
- Sparse Autoencoder Training Library ☆50 · Updated last month
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆58 · Updated last year
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆59 · Updated 3 years ago
- Resources from the EleutherAI Math Reading Group ☆53 · Updated 3 months ago
- Understanding how features learned by neural networks evolve throughout training ☆34 · Updated 7 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆73 · Updated 7 months ago
- Einsum-like high-level array sharding API for JAX ☆34 · Updated 10 months ago
- Machine Learning eXperiment Utilities ☆46 · Updated 11 months ago
- Implementation of OpenAI's 'Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets' paper ☆36 · Updated last year
- JAX implementation of the Mistral 7b v0.2 model ☆34 · Updated 10 months ago
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Universal Neurons in GPT2 Language Models ☆29 · Updated last year