opooladz / Preconditioned-Stochastic-Gradient-Descent
A repo based on XiLin Li's PSGD repo that extends some of its experiments.
☆14 · Updated last year
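For context, preconditioned SGD (PSGD) rescales the stochastic gradient with a learned preconditioner P before the parameter update, i.e. θ ← θ − lr · P · grad. The sketch below is only a minimal illustration with a fixed diagonal P standing in for the learned (e.g. Kronecker-factored) preconditioners used in XiLin Li's PSGD; the function and variable names are hypothetical and are not the API of either repository.

```python
# Minimal sketch of a preconditioned SGD update with a FIXED diagonal
# preconditioner. PSGD itself learns the preconditioner online; this is
# illustrative only and not the API of the repositories listed here.
import torch

def preconditioned_sgd_step(params, precond_diags, lr=1e-2):
    """One update: theta <- theta - lr * P @ grad, with P taken as diagonal here."""
    with torch.no_grad():
        for p, d in zip(params, precond_diags):
            if p.grad is not None:
                p -= lr * d * p.grad
                p.grad = None

# Tiny least-squares usage example.
w = torch.randn(3, requires_grad=True)
diag = [torch.ones(3)]                      # placeholder; PSGD would learn this online
x, y = torch.randn(64, 3), torch.randn(64)
for _ in range(100):
    loss = ((x @ w - y) ** 2).mean()
    loss.backward()
    preconditioned_sgd_step([w], diag)
```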
Alternatives and similar repositories for Preconditioned-Stochastic-Gradient-Descent
Users interested in Preconditioned-Stochastic-Gradient-Descent are comparing it to the libraries listed below.
- Maximal Update Parametrization (μP) with Flax & Optax. ☆16 · Updated 2 years ago
- A State-Space Model with Rational Transfer Function Representation. ☆83 · Updated last year
- ☆18 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆84 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆92 · Updated last year
- ☆19 · Updated 2 months ago
- Scalable and Stable Parallelization of Nonlinear RNNs ☆28 · Updated 3 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- Supporting code for the blog post on modular manifolds. ☆115 · Updated 4 months ago
- ☆250 · Updated last year
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in Pytorch ☆73 · Updated 2 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆84 · Updated 2 months ago
- ☆124 · Updated 8 months ago
- ☆62 · Updated last year
- FID computation in Jax/Flax. ☆29 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆93 · Updated 2 years ago
- ☆70 · Updated last year
- ☆168 · Updated 3 months ago
- Code accompanying the paper "LaProp: a Better Way to Combine Momentum with Adaptive Gradient" ☆29 · Updated 5 years ago
- ☆32 · Updated last year
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆172 · Updated last week
- Utilities for PyTorch distributed ☆25 · Updated 11 months ago
- Implementation of Diffusion Transformers and Rectified Flow in Jax ☆27 · Updated last year
- An implementation of the PSGD Kron second-order optimizer for PyTorch ☆98 · Updated 6 months ago
- Focused on fast experimentation and simplicity ☆80 · Updated last year
- ☆53 · Updated 2 years ago
- Efficient optimizers ☆281 · Updated last month
- ☆35 · Updated last year
- Simple implementation of muP, based on the Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam. ☆85 · Updated last year
- The 2D discrete wavelet transform for JAX ☆44 · Updated 2 years ago