Ping-C / optimizer
This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explained Without the Implicit Bias of Gradient Descent"
☆35 · Updated last year
Alternatives and similar repositories for optimizer:
Users interested in optimizer are comparing it to the repositories listed below.
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆36 · Updated 2 years ago
- A centralized place for deep thinking code and experiments ☆81 · Updated last year
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆58 · Updated last year
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆21 · Updated 7 months ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆60 · Updated 4 months ago
- Code for "The Intrinsic Dimension of Images and Its Impact on Learning" [ICLR 2021 Spotlight] https://openreview.net/forum?id=XJk19XzGq2J ☆65 · Updated 9 months ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆55 · Updated last year
- Source code for "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- Revisiting Efficient Training Algorithms for Transformer-based Language Models [NeurIPS 2023] ☆79 · Updated last year
- Latest Weight Averaging [NeurIPS HITY 2022] ☆28 · Updated last year
- SGD with large step sizes learns sparse features [ICML 2023] ☆32 · Updated last year
- Distilling Model Failures as Directions in Latent Space ☆46 · Updated 2 years ago
- PyTorch Datasets for Easy-To-Hard ☆27 · Updated last month
- Transformers with doubly stochastic attention ☆44 · Updated 2 years ago
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair ☆46 · Updated last year
- Recycling diverse models ☆44 · Updated 2 years ago
- The Energy Transformer block, in JAX ☆55 · Updated last year
- PyTorch code for "Improving Self-Supervised Learning by Characterizing Idealized Representations" ☆40 · Updated 2 years ago
- Code accompanying the paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆59 · Updated 3 years ago
- Sparse and discrete interpretability tool for neural networks ☆58 · Updated last year