KindXiaoming / Omnigrok
Omnigrok: Grokking Beyond Algorithmic Data
☆62 · Updated 2 years ago
Alternatives and similar repositories for Omnigrok
Users interested in Omnigrok are comparing it to the libraries listed below.
- ☆73 · Updated last year
- ☆27 · Updated 2 years ago
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" (see the sketch after this list) ☆81 · Updated 3 years ago
- Implementation of OpenAI's 'Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets' paper. ☆40 · Updated 2 years ago
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" ☆40 · Updated 2 years ago
- ☆33 · Updated last year
- ☆31 · Updated 9 months ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule ☆63 · Updated 2 years ago
- Official repository for our paper, "Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode…" ☆20 · Updated last year
- Deep Networks Always Grok and Here is Why ☆38 · Updated last year
- Universal Neurons in GPT2 Language Models ☆31 · Updated last year
- ☆62 · Updated last year
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. ☆174 · Updated 2 years ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆37 · Updated 3 years ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆86 · Updated last year
- ☆24 · Updated 8 months ago
- Sparse and discrete interpretability tool for neural networks ☆65 · Updated last year
- PyTorch code for experiments on Linear Transformers ☆24 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆92 · Updated last year
- Parallelizing non-linear sequential models over the sequence length ☆56 · Updated 6 months ago
- ☆72 · Updated 3 years ago
- Official code for "Algorithmic Capabilities of Random Transformers" (NeurIPS 2024) ☆16 · Updated last year
- Sparse Autoencoder Training Library ☆56 · Updated 8 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆131 · Updated 3 years ago
- ☆167 · Updated 2 years ago
- ☆60 · Updated 8 months ago
- A centralized place for deep thinking code and experiments ☆88 · Updated 2 years ago
- ☆241 · Updated last year
- This repository contains PyTorch implementations of various random feature maps for dot product kernels. ☆22 · Updated last year
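
Several entries above re-implement the experiments from "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets". For orientation, here is a minimal sketch of the standard modular-addition setup those repositories train; it is not code from any repository listed, and the modulus, architecture, train/test split, and hyperparameters are illustrative assumptions.

```python
# Minimal grokking sketch (illustrative assumptions; not from any repo above):
# train a small MLP on a + b (mod P) with strong weight decay and watch test
# accuracy stay near chance long after train accuracy saturates.
import torch
import torch.nn as nn

torch.manual_seed(0)
P = 97                                        # task modulus (assumed)
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
split = len(pairs) // 2                       # 50/50 train/test split (assumed)
train_idx, test_idx = perm[:split], perm[split:]

embed = nn.Embedding(P, 64)                   # one embedding per residue
mlp = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, P))
params = list(embed.parameters()) + list(mlp.parameters())
# Heavy weight decay is the regularizer most grokking setups lean on.
opt = torch.optim.AdamW(params, lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        x = embed(pairs[idx]).flatten(1)      # concat the two embeddings
        return (mlp(x).argmax(-1) == labels[idx]).float().mean().item()

for step in range(20_000):                    # full-batch training
    x = embed(pairs[train_idx]).flatten(1)
    loss = loss_fn(mlp(x), labels[train_idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 1_000 == 0:
        print(f"step {step:6d}  train {accuracy(train_idx):.2f}  "
              f"test {accuracy(test_idx):.2f}")
```

With settings in this ballpark, train accuracy typically saturates early while test accuracy climbs much later; removing the weight decay tends to delay or prevent that jump.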