d-doshi / Grokking
☆14 · Updated 3 months ago
Alternatives and similar repositories for Grokking
Users interested in Grokking are comparing it to the repositories listed below.
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- Recycling diverse models ☆44 · Updated 2 years ago
- Deep Learning & Information Bottleneck ☆60 · Updated last year
- ☆16 · Updated last year
- Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models" ☆22 · Updated last year
- ☆14 · Updated last year
- ☆19 · Updated 10 months ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- Code for the paper "Ensemble everything everywhere: Multi-scale aggregation for adversarial robustness" ☆21 · Updated 7 months ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆56 · Updated last year
- Codebase for Mechanistic Mode Connectivity ☆14 · Updated last year
- Computationally friendly hyper-parameter search with DP-SGD ☆25 · Updated 5 months ago
- ☆30 · Updated 10 months ago
- ☆18 · Updated 2 years ago
- Privacy backdoors ☆51 · Updated last year
- Intriguing Properties of Data Attribution on Diffusion Models (ICLR 2024) ☆30 · Updated last year
- Pytorch Datasets for Easy-To-Hard ☆27 · Updated 4 months ago
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" ☆37 · Updated 2 years ago
- Provably (and non-vacuously) bounding test error of deep neural networks under distribution shift with unlabeled test data. ☆10 · Updated last year
- Understanding Rare Spurious Correlations in Neural Networks ☆12 · Updated 3 years ago
- This repository holds code and other relevant files for the NeurIPS 2022 tutorial: Foundational Robustness of Foundation Models. ☆70 · Updated 2 years ago
- DiWA: Diverse Weight Averaging for Out-of-Distribution Generalization ☆31 · Updated 2 years ago
- ☆45 · Updated 2 years ago
- Omnigrok: Grokking Beyond Algorithmic Data ☆58 · Updated 2 years ago
- [NeurIPS 2023] and [ICLR 2024] for robustness certification. ☆10 · Updated 6 months ago
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆26 · Updated 11 months ago
- Deep Networks Grok All the Time and Here is Why ☆36 · Updated last year
- Code for NeurIPS'23 paper "A Bayesian Approach To Analysing Training Data Attribution In Deep Learning" ☆17 · Updated last year
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆20 · Updated last year