KindXiaoming / Omnigrok
Omnigrok: Grokking Beyond Algorithmic Data
☆52 · Updated last year
Alternatives and similar repositories for Omnigrok:
Users interested in Omnigrok are comparing it to the repositories listed below.
- Implementation of OpenAI's 'Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets' paper. ☆35 · Updated last year
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆70 · Updated 2 years ago
- ☆24 · Updated last year
- ☆63 · Updated last month
- ☆59 · Updated 2 years ago
- Deep Learning & Information Bottleneck ☆53 · Updated last year
- PyTorch implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆34 · Updated 3 years ago
- Efficient empirical NTKs in PyTorch ☆18 · Updated 2 years ago
- DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule ☆58 · Updated last year
- ☆26 · Updated last year
- ☆83 · Updated last year
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆36 · Updated 2 years ago
- Influence Functions with (Eigenvalue-corrected) Kronecker-Factored Approximate Curvature ☆127 · Updated 5 months ago
- PyTorch code for experiments on Linear Transformers ☆17 · Updated last year
- A centralized place for deep thinking code and experiments ☆79 · Updated last year
- ☆211 · Updated 8 months ago
- ☆45 · Updated this week
- Transformers with doubly stochastic attention ☆44 · Updated 2 years ago
- Code for the paper: "Tensor Programs II: Neural Tangent Kernel for Any Architecture" ☆103 · Updated 4 years ago
- Code for the paper "Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression" ☆20 · Updated last year
- Universal Neurons in GPT2 Language Models ☆27 · Updated 8 months ago
- Sparse Autoencoder Training Library ☆39 · Updated 3 months ago
- Artificial Kuramoto Oscillatory Neurons ☆46 · Updated last week
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆82 · Updated last year
- ☆16 · Updated 9 months ago
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. ☆164 · Updated last year
- ☆24 · Updated last week
- ☆109 · Updated 5 months ago
- ☆17 · Updated last year
- ☆41 · Updated this week