KindXiaoming / Omnigrok
Omnigrok: Grokking Beyond Algorithmic Data
☆55 · Updated 2 years ago
Alternatives and similar repositories for Omnigrok:
Users interested in Omnigrok are comparing it to the libraries listed below.
- Implementation of OpenAI's "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" paper (☆36, updated last year)
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" (☆78, updated 2 years ago)
- ☆67, updated 4 months ago
- ☆25, updated 2 years ago
- Deep Networks Grok All the Time and Here is Why (☆34, updated 11 months ago)
- Deep Learning & Information Bottleneck (☆60, updated last year)
- Code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" (☆36, updated 2 years ago)
- ☆52, updated 6 months ago
- Source code for "What can linearized neural networks actually say about generalization?" (☆20, updated 3 years ago)
- Code accompanying the paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) (☆62, updated 3 years ago)
- PyTorch implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" (☆36, updated 3 years ago)
- DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule (☆60, updated last year)
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs (☆36, updated 2 years ago)
- Transformers with doubly stochastic attention (☆45, updated 2 years ago)
- ☆19, updated last week
- ☆28, updated 3 weeks ago
- Code for GFlowNet-EM, a novel algorithm for fitting latent variable models with compositional latents and an intractable true posterior (☆40, updated last year)
- ☆62, updated 2 years ago
- ☆62, updated 3 years ago
- Efficient empirical NTKs in PyTorch (☆18, updated 2 years ago)
- NF-Layers for constructing neural functionals (☆84, updated last year)
- PyTorch code for experiments on linear Transformers (☆20, updated last year)
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable (☆168, updated last year)
- Neural Tangent Kernel Papers (☆108, updated 3 months ago)
- ☆46, updated 2 weeks ago
- ☆49, updated last year
- ☆26, updated last year
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" (☆24, updated 10 months ago)
- ☆16, updated 7 months ago
- Laplace Redux: Effortless Bayesian Deep Learning (☆43, updated 2 years ago)