Sea-Snell / grokking
Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets"
☆79 · Updated 3 years ago
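For context, grokking experiments in this lineage typically train a small transformer on an algorithmic task such as modular addition and observe validation accuracy jumping long after training accuracy saturates. Below is a minimal, illustrative sketch of that dataset setup; the function name, the prime `p = 97`, and the 50% train split are assumptions chosen for illustration, not this repository's actual API.

```python
# Illustrative sketch (not this repo's API): the canonical grokking setup
# trains a small model on binary modular addition, (a, b) -> (a + b) mod p,
# using only a fraction of all p*p pairs and validating on the rest.
import itertools
import random

def modular_addition_dataset(p=97, train_frac=0.5, seed=0):
    """All pairs (a, b) labeled (a + b) % p, shuffled and split train/val."""
    pairs = list(itertools.product(range(p), repeat=2))
    random.Random(seed).shuffle(pairs)
    data = [((a, b), (a + b) % p) for a, b in pairs]
    cut = int(train_frac * len(data))
    return data[:cut], data[cut:]

train, val = modular_addition_dataset()
print(len(train), len(val))  # 4704 4705 for p=97
```

Training on such a split for long enough (typically with weight decay) is what produces the delayed-generalization curves the paper reports.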
Alternatives and similar repositories for grokking
Users interested in grokking are comparing it to the libraries listed below.
- Implementation of OpenAI's 'Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets' paper. ☆39 · Updated last year
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆190 · Updated 2 years ago
- Omnigrok: Grokking Beyond Algorithmic Data ☆61 · Updated 2 years ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆128 · Updated 3 years ago
- ☆166 · Updated 2 years ago
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated last year
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆62 · Updated 4 years ago
- [NeurIPS 2023] Learning Transformer Programs ☆163 · Updated last year
- A centralized place for deep thinking code and experiments ☆86 · Updated 2 years ago
- ☆186 · Updated last year
- ☆83 · Updated 2 years ago
- nanoGPT-like codebase for LLM training ☆107 · Updated 4 months ago
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" ☆38 · Updated 2 years ago
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- A library to create and manage configuration files, especially for machine learning projects. ☆79 · Updated 3 years ago
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- ☆27 · Updated 2 years ago
- Neural Networks and the Chomsky Hierarchy ☆209 · Updated last year
- ☆68 · Updated 2 years ago
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- ☆52 · Updated last year
- ☆29 · Updated last year
- ☆53 · Updated last year
- Sparse Autoencoder Training Library ☆54 · Updated 4 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆129 · Updated 8 months ago
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici…" ☆108 · Updated last year
- ☆31 · Updated 5 months ago
- Language models scale reliably with over-training and on downstream tasks ☆99 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆192 · Updated last year