Ping-C / optimizer
This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explained Without the Implicit Bias of Gradient Descent".
☆40 · Updated 2 years ago
Alternatives and similar repositories for optimizer
Users interested in optimizer are comparing it to the libraries listed below:
- A centralized place for deep thinking code and experiments ☆88 · Updated 2 years ago
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated 2 years ago
- Sparse and discrete interpretability tool for neural networks ☆65 · Updated last year
- ☆62 · Updated last year
- ModelDiff: A Framework for Comparing Learning Algorithms ☆58 · Updated 2 years ago
- Revisiting Efficient Training Algorithms for Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- ☆73 · Updated last year
- ☆27 · Updated 2 years ago
- ☆52 · Updated last year
- PyTorch implementation of "Long Horizon Temperature Scaling" (ICML 2023) ☆20 · Updated 2 years ago
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair ☆52 · Updated last year
- Latest Weight Averaging (NeurIPS HITY 2022) ☆32 · Updated 2 years ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆37 · Updated 3 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? (NeurIPS 2024) ☆69 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- ☆50 · Updated last week
- ☆54 · Updated last year
- Omnigrok: Grokking Beyond Algorithmic Data ☆62 · Updated 2 years ago
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" ☆55 · Updated last year
- Official code for "Algorithmic Capabilities of Random Transformers" (NeurIPS 2024) ☆16 · Updated last year
- The Energy Transformer block, in JAX ☆63 · Updated 2 years ago
- Code for minimum-entropy coupling ☆32 · Updated last month
- A modern look at the relationship between sharpness and generalization (ICML 2023) ☆43 · Updated 2 years ago
- Code for "Can We Scale Transformers to Predict Parameters of Diverse ImageNet Models?" (ICML 2023) ☆37 · Updated last year
- ☆18 · Updated 3 years ago
- ☆45 · Updated 2 years ago
- Code accompanying the paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) ☆30 · Updated 2 months ago
- ☆63 · Updated 3 years ago
- Deep Networks Grok All the Time and Here is Why ☆38 · Updated last year