KellerJordan / top-sgd
Optimization algorithm which fits a ResNet to CIFAR-10 5x faster than SGD / Adam (with terrible generalization)
☆14 · Updated last year
Alternatives and similar repositories for top-sgd
Users interested in top-sgd are comparing it to the repositories listed below
- Pytorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆180 · Updated this week
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆88 · Updated last year
- Parameter-Free Optimizers for Pytorch ☆130 · Updated last year
- Maximal Update Parametrization (μP) with Flax & Optax. ☆16 · Updated last year
- A simple library for scaling up JAX programs ☆143 · Updated 10 months ago
- Open source code for EigenGame. ☆30 · Updated 2 years ago
- Euclidean Wasserstein-2 optimal transportation ☆47 · Updated 2 years ago
- ☆57 · Updated 11 months ago
- ☆34 · Updated 9 months ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆36 · Updated 2 years ago
- ☆210 · Updated 9 months ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆127 · Updated last year
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- ☆40 · Updated last year
- minGPT in JAX ☆48 · Updated 3 years ago
- Deep Networks Always Grok and Here Is Why ☆37 · Updated last year
- DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule ☆63 · Updated 2 years ago
- Code implementing "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023) ☆94 · Updated 9 months ago
- Easy Hypernetworks in Pytorch and Jax ☆104 · Updated 2 years ago
- ☆65 · Updated 10 months ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- ☆52 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆36 · Updated last year
- Implementation of PSGD optimizer in JAX ☆34 · Updated 8 months ago
- ☆118 · Updated 3 months ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆67 · Updated 11 months ago
- Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural netwo… ☆71 · Updated 2 months ago
- 🧱 Modula software package ☆237 · Updated 3 weeks ago
- Meta Optimal Transport ☆103 · Updated 2 years ago
- ☆115 · Updated last week