Sea-Snell/grokking
Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets"
☆78 · Updated 2 years ago
Alternatives and similar repositories for grokking:
Users who are interested in grokking are comparing it to the repositories listed below.
- Implementation of OpenAI's 'Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets' paper. ☆36 · Updated last year
- Omnigrok: Grokking Beyond Algorithmic Data ☆55 · Updated 2 years ago
- PyTorch implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆36 · Updated 3 years ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆62 · Updated 3 years ago
- Scaling scaling laws with board games. ☆48 · Updated last year
- Mechanistic Interpretability for Transformer Models ☆50 · Updated 2 years ago
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… ☆36 · Updated 2 years ago
- ☆49 · Updated last year
- ☆62 · Updated 2 years ago
- ☆26 · Updated last year
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆59 · Updated 3 years ago
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici… ☆105 · Updated last year
- ☆25 · Updated 2 years ago
- ☆51 · Updated 11 months ago
- ☆175 · Updated last year
- ☆114 · Updated 8 months ago
- LoRA for arbitrary JAX models and functions ☆136 · Updated last year
- Tools for studying developmental interpretability in neural networks. ☆88 · Updated 3 months ago
- This repo is built to facilitate the training and analysis of autoregressive transformers on maze-solving tasks. ☆27 · Updated 7 months ago
- ☆67 · Updated 4 months ago
- Code associated with papers on superposition (in ML interpretability) ☆27 · Updated 2 years ago
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆58 · Updated last year
- ☆79 · Updated last year
- Interpreting how transformers simulate agents performing RL tasks ☆80 · Updated last year
- ☆26 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆192 · Updated last month
- A MAD laboratory to improve AI architecture designs 🧪 ☆111 · Updated 4 months ago
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- Universal Neurons in GPT2 Language Models ☆27 · Updated 10 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆33 · Updated last year
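For context on what these re-implementations train on: the grokking paper studies small algorithmic datasets such as modular addition, where a network first memorizes a small training split and only much later generalizes to the held-out pairs. A minimal sketch of generating such a dataset is below; the function name, modulus, and split fraction are illustrative assumptions, not taken from any of the repositories above.

```python
import random

def modular_addition_dataset(p=97, train_frac=0.3, seed=0):
    """Illustrative sketch: all pairs (a, b) in Z_p labeled with (a + b) mod p,
    shuffled and split into a small train set and a large held-out set."""
    pairs = [((a, b), (a + b) % p) for a in range(p) for b in range(p)]
    rng = random.Random(seed)
    rng.shuffle(pairs)
    cut = int(train_frac * len(pairs))  # small train fraction, rest held out
    return pairs[:cut], pairs[cut:]

train, test = modular_addition_dataset()
print(len(train), len(test))
```

In the grokking setting, a small transformer trained on such a split (typically with weight decay, well past the point of fitting the train set) eventually jumps to near-perfect held-out accuracy.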