Sea-Snell / grokking
Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets"
☆79 · Updated 3 years ago
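For readers new to the topic: the paper trains small networks on algorithmic tasks such as modular addition and observes validation accuracy jumping from chance to near-perfect long after training accuracy saturates. Below is a minimal sketch of the canonical dataset construction, assuming a PyTorch setup; the function name, split fraction, and seed are illustrative and not taken from this repository.

```python
import itertools
import random

import torch

P = 97            # prime modulus used for the paper's modular-arithmetic tasks
TRAIN_FRAC = 0.5  # fraction of all (a, b) pairs used for training (illustrative)

def make_modular_addition_data(p=P, train_frac=TRAIN_FRAC, seed=0):
    """Enumerate every (a, b) pair for (a + b) mod p and split train/val."""
    pairs = list(itertools.product(range(p), repeat=2))
    random.Random(seed).shuffle(pairs)
    split = int(train_frac * len(pairs))

    def to_tensors(subset):
        x = torch.tensor(subset)        # shape (n, 2): the operand pairs
        y = (x[:, 0] + x[:, 1]) % p     # shape (n,): the answers mod p
        return x, y

    return to_tensors(pairs[:split]), to_tensors(pairs[split:])

(train_x, train_y), (val_x, val_y) = make_modular_addition_data()
print(train_x.shape, val_x.shape)  # torch.Size([4704, 2]) torch.Size([4705, 2])
```

Because the full task has only p² examples, grokking experiments typically train on a fixed fraction of all pairs and evaluate on the held-out rest, as above.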
Alternatives and similar repositories for grokking
Users interested in grokking are also comparing it to the repositories listed below.
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆191 · Updated 2 years ago
- Implementation of OpenAI's "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" paper. ☆39 · Updated 2 years ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper (a minimal toy-model sketch follows this list) ☆129 · Updated 3 years ago
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated last year
- nanoGPT-like codebase for LLM training ☆107 · Updated 4 months ago
- Omnigrok: Grokking Beyond Algorithmic Data ☆62 · Updated 2 years ago
- Neural Networks and the Chomsky Hierarchy ☆209 · Updated last year
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" ☆39 · Updated 2 years ago
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- Mechanistic Interpretability for Transformer Models ☆52 · Updated 3 years ago
- Train very large language models in JAX. ☆209 · Updated last year
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici…" ☆108 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆129 · Updated 9 months ago
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆59 · Updated 3 years ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- This repo is built to facilitate the training and analysis of autoregressive transformers on maze-solving tasks. ☆31 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆164 · Updated 3 months ago
- Scaling scaling laws with board games. ☆53 · Updated 2 years ago
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
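As referenced above, here is a minimal sketch of the setup behind the "Toy Models of Superposition" notebooks: a tied-weight autoencoder forced to squeeze more sparse features than it has hidden dimensions. All shapes, the sparsity level, and the optimizer settings below are illustrative assumptions, not code from the notebooks.

```python
import torch

n_features, n_hidden, batch, sparsity = 20, 5, 1024, 0.9

# Tied-weight toy autoencoder: project 20 features down to 5 dims and back up.
W = torch.nn.Parameter(0.1 * torch.randn(n_hidden, n_features))
b = torch.nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-3)

for step in range(2000):
    # Synthetic sparse features: each is nonzero with probability 1 - sparsity.
    x = torch.rand(batch, n_features) * (torch.rand(batch, n_features) < 1 - sparsity)
    recon = torch.relu(x @ W.T @ W + b)  # down-project, up-project, ReLU
    loss = ((recon - x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# With sufficiently sparse inputs, the columns of W come to share the 5
# hidden dimensions ("superposition") rather than representing only 5
# of the 20 features; inspecting W.T @ W makes this visible.
print(loss.item())
```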