d-doshi / Grokking
☆13 · Updated 2 months ago
Alternatives and similar repositories for Grokking
Users interested in Grokking are comparing it to the repositories listed below.
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Deep Learning & Information Bottleneck ☆60 · Updated last year
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- Omnigrok: Grokking Beyond Algorithmic Data ☆56 · Updated 2 years ago
- ☆16 · Updated last year
- Understanding Rare Spurious Correlations in Neural Networks ☆12 · Updated 2 years ago
- ☆60 · Updated 3 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet ☆31 · Updated last year
- Codebase for Mechanistic Mode Connectivity ☆14 · Updated last year
- Code for NeurIPS'23 paper "A Bayesian Approach To Analysing Training Data Attribution In Deep Learning" ☆17 · Updated last year
- ModelDiff: A Framework for Comparing Learning Algorithms ☆56 · Updated last year
- ☆43 · Updated 2 years ago
- Pytorch Datasets for Easy-To-Hard ☆27 · Updated 4 months ago
- ☆24 · Updated 3 months ago
- ☆33 · Updated 4 months ago
- Recycling diverse models ☆44 · Updated 2 years ago
- ☆14 · Updated last year
- ☆67 · Updated 5 months ago
- ☆38 · Updated 3 years ago
- Distilling Model Failures as Directions in Latent Space ☆46 · Updated 2 years ago
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆25 · Updated 10 months ago
- Intriguing Properties of Data Attribution on Diffusion Models (ICLR 2024) ☆28 · Updated last year
- Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models" ☆22 · Updated last year
- Code for the paper "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆18 · Updated 2 years ago
- ☆28 · Updated last year
- ☆19 · Updated 10 months ago
- Privacy backdoors ☆51 · Updated last year
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆20 · Updated last year
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 2 years ago