wattenberg / superposition
Code associated with papers on superposition (in ML interpretability)
☆ 29 · Updated 2 years ago
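For context, the superposition phenomenon these repositories study is usually illustrated with a small toy model: more sparse features than hidden dimensions, compressed by a linear map and reconstructed with tied weights and a ReLU. The sketch below is a minimal, hypothetical JAX version of that setup; it is not code from this repository, and the dimensions, sparsity level, and training loop are illustrative assumptions.

```python
# Minimal sketch of a toy model of superposition (illustrative, not from this repo):
# n_features sparse features are squeezed through an n_hidden < n_features
# bottleneck and reconstructed with tied weights and a ReLU.
import jax
import jax.numpy as jnp

n_features, n_hidden, sparsity = 20, 5, 0.95  # illustrative choices

def init_params(key):
    W = 0.1 * jax.random.normal(key, (n_hidden, n_features))
    b = jnp.zeros(n_features)
    return W, b

def reconstruct(params, x):
    W, b = params
    h = x @ W.T                    # (batch, n_hidden): compress
    return jax.nn.relu(h @ W + b)  # (batch, n_features): decompress with tied weights

def sample_batch(key, batch_size=1024):
    k_mask, k_val = jax.random.split(key)
    mask = jax.random.bernoulli(k_mask, 1.0 - sparsity, (batch_size, n_features))
    return mask * jax.random.uniform(k_val, (batch_size, n_features))  # mostly zeros

def loss(params, x):
    return jnp.mean((reconstruct(params, x) - x) ** 2)

@jax.jit
def train_step(params, key, lr=0.05):
    grads = jax.grad(loss)(params, sample_batch(key))
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = init_params(key)
for _ in range(2000):
    key, subkey = jax.random.split(key)
    params = train_step(params, subkey)

W, _ = params
# With sparse enough features, more than n_hidden features get non-trivial
# directions and interfere with each other: W.T @ W has significant
# off-diagonal entries, i.e. the features are stored in superposition.
print(jnp.round(W.T @ W, 2))
```

The tied-weight, ReLU-readout form follows the setup in Anthropic's "Toy Models of Superposition" paper, whose accompanying notebooks appear in the list below.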
Alternatives and similar repositories for superposition
Users interested in superposition are comparing it with the repositories listed below.
- ☆ 26 · Updated 2 years ago
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper · ☆ 59 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. · ☆ 149 · Updated last month
- unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" · ☆ 78 · Updated 3 years ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper · ☆ 128 · Updated 2 years ago
- ☆ 166 · Updated 2 years ago
- nanoGPT-like codebase for LLM training · ☆ 102 · Updated 2 months ago
- Neural Networks and the Chomsky Hierarchy · ☆ 207 · Updated last year
- ☆ 83 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX · ☆ 85 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 · ☆ 123 · Updated 7 months ago
- A centralized place for deep thinking code and experiments · ☆ 85 · Updated 2 years ago
- Redwood Research's transformer interpretability tools · ☆ 14 · Updated 3 years ago
- Sparse and discrete interpretability tool for neural networks · ☆ 63 · Updated last year
- ☆ 51 · Updated last year
- gzip Predicts Data-dependent Scaling Laws · ☆ 35 · Updated last year
- Understand and test language model architectures on synthetic tasks. · ☆ 221 · Updated 3 weeks ago
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" · ☆ 59 · Updated 3 years ago
- ☆ 68 · Updated 2 years ago
- The Energy Transformer block, in JAX · ☆ 59 · Updated last year
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) · ☆ 62 · Updated 4 years ago
- seqax = sequence modeling + JAX · ☆ 165 · Updated 2 weeks ago
- Understanding how features learned by neural networks evolve throughout training · ☆ 36 · Updated 9 months ago
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. · ☆ 172 · Updated 2 years ago
- ☆ 53 · Updated last year
- Omnigrok: Grokking Beyond Algorithmic Data · ☆ 61 · Updated 2 years ago
- ☆ 53 · Updated 10 months ago
- LoRA for arbitrary JAX models and functions · ☆ 140 · Updated last year
- Learning Universal Predictors · ☆ 78 · Updated last year
- ☆ 30 · Updated 4 months ago