KellerJordan / top-sgd
Optimization algorithm which fits a ResNet to CIFAR-10 5x faster than SGD / Adam (with terrible generalization)
☆14 · Updated 2 years ago
Alternatives and similar repositories for top-sgd
Users interested in top-sgd are comparing it to the libraries listed below.
- Pytorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆188 · Updated this week
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆92 · Updated last year
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆37 · Updated 3 years ago
- Euclidean Wasserstein-2 optimal transportation ☆47 · Updated 2 years ago
- Parameter-Free Optimizers for Pytorch ☆130 · Updated last year
- ☆62 · Updated last year
- Easy Hypernetworks in Pytorch and Jax ☆106 · Updated 2 years ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆37 · Updated 2 years ago
- LoRA for arbitrary JAX models and functions ☆143 · Updated last year
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- minGPT in JAX ☆48 · Updated 3 years ago
- Open source code for EigenGame. ☆34 · Updated 2 years ago
- DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule ☆63 · Updated 2 years ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆127 · Updated 2 years ago
- Deep Networks Grok All the Time and Here is Why ☆38 · Updated last year
- This repository contains a Jax implementation of conformal training corresponding to the ICLR'22 paper "learning optimal conformal classi… ☆130 · Updated 3 years ago
- A collection of meta-learning algorithms in Jax ☆23 · Updated 3 years ago
- Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural netwo… ☆74 · Updated 5 months ago
- A simple library for scaling up JAX programs ☆144 · Updated last month
- Lightning-like training API for JAX with Flax ☆44 · Updated last year
- ☆233 · Updated 10 months ago
- ☆54 · Updated last year
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆60 · Updated 3 years ago
- ☆73 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆69 · Updated last year
- Distributed K-FAC preconditioner for PyTorch ☆93 · Updated last week
- ☆230 · Updated last year
- ☆40 · Updated last year
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… ☆40 · Updated 2 years ago
- Maximal Update Parametrization (μP) with Flax & Optax. ☆16 · Updated 2 years ago