Sea-Snell/grokking
Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets"
☆70 · Updated 2 years ago
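The paper this repository re-implements trains a small transformer on a fraction of a modular-arithmetic operation table and observes that validation accuracy jumps long after training accuracy saturates. A minimal sketch of that dataset construction is below; the function name, the choice of p = 97, and the 50% train split are illustrative assumptions, not code from this repository.

```python
# Hypothetical sketch of the grokking setup's data: learn (a + b) mod p
# from a random half of the full p x p addition table. The unseen half
# serves as the validation set on which delayed generalization is measured.
import random

def make_modular_addition_split(p=97, train_frac=0.5, seed=0):
    """Enumerate all (a, b, (a + b) % p) triples and split them randomly."""
    triples = [(a, b, (a + b) % p) for a in range(p) for b in range(p)]
    rng = random.Random(seed)
    rng.shuffle(triples)
    cut = int(train_frac * len(triples))
    return triples[:cut], triples[cut:]

train, val = make_modular_addition_split()
print(len(train), len(val))  # 4704 4705 for p=97
```

Each triple is typically tokenized as the sequence "a op b =" with the model predicting the result; regularization such as weight decay is reported in the paper to be important for the grokking transition.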
Alternatives and similar repositories for grokking:
Users interested in grokking are also comparing it to the repositories listed below.
- Implementation of OpenAI's "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" paper. ☆35 · Updated last year
- Omnigrok: Grokking Beyond Algorithmic Data ☆52 · Updated last year
- ☆48 · Updated 11 months ago
- Scaling scaling laws with board games. ☆45 · Updated last year
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" ☆35 · Updated last year
- ☆59 · Updated 2 years ago
- ☆63 · Updated last month
- ☆26 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆48 · Updated 2 months ago
- ☆24 · Updated last year
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆60 · Updated 2 years ago
- Universal Neurons in GPT2 Language Models ☆27 · Updated 8 months ago
- Sparse and discrete interpretability tool for neural networks ☆59 · Updated 11 months ago
- Neural Networks and the Chomsky Hierarchy ☆196 · Updated 9 months ago
- Sparse Autoencoder Training Library ☆39 · Updated 3 months ago
- PyTorch implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆34 · Updated 3 years ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆59 · Updated 3 years ago
- Mechanistic Interpretability for Transformer Models ☆49 · Updated 2 years ago
- ☆109 · Updated 5 months ago
- ☆25 · Updated 9 months ago
- Interpreting how transformers simulate agents performing RL tasks ☆77 · Updated last year
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆57 · Updated last year
- ☆51 · Updated 8 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆108 · Updated 2 years ago
- Tools for studying developmental interpretability in neural networks. ☆83 · Updated this week
- A centralized place for deep thinking code and experiments ☆79 · Updated last year
- Resources from the EleutherAI Math Reading Group ☆52 · Updated last month
- Redwood Research's transformer interpretability tools ☆13 · Updated 2 years ago
- ☆54 · Updated 2 months ago
- This repo is built to facilitate the training and analysis of autoregressive transformers on maze-solving tasks. ☆26 · Updated 5 months ago