opooladz / Preconditioned-Stochastic-Gradient-Descent
A repo based on Xi-Lin Li's PSGD repo that extends some of the experiments.
☆14 · Updated 10 months ago
Alternatives and similar repositories for Preconditioned-Stochastic-Gradient-Descent
Users interested in Preconditioned-Stochastic-Gradient-Descent are comparing it to the libraries listed below.
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 8 months ago
- ☆57 · Updated 11 months ago
- Maximal Update Parametrization (μP) with Flax & Optax. ☆16 · Updated last year
- ☆115 · Updated 2 months ago
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in PyTorch ☆65 · Updated 3 weeks ago
- ☆207 · Updated 9 months ago
- ☆65 · Updated 9 months ago
- Implementation of Diffusion Transformers and Rectified Flow in JAX ☆25 · Updated last year
- A state-space model with a rational transfer function representation. ☆79 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆19 · Updated last month
- ☆17 · Updated last year
- ☆34 · Updated 11 months ago
- Efficient optimizers ☆259 · Updated last month
- Implementation of the GateLoop Transformer in PyTorch and JAX ☆90 · Updated last year
- FID computation in JAX/Flax. ☆28 · Updated last year
- Focused on fast experimentation and simplicity ☆75 · Updated 8 months ago
- Code accompanying the paper "LaProp: A Better Way to Combine Momentum with Adaptive Gradient" ☆29 · Updated 5 years ago
- ☆19 · Updated 3 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 8 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆68 · Updated 3 weeks ago
- Utilities for PyTorch distributed ☆25 · Updated 6 months ago
- An implementation of the PSGD Kron second-order optimizer for PyTorch ☆96 · Updated last month
- ☆53 · Updated last year
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- Simple implementation of μP, based on the Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it with Adam. ☆85 · Updated last year
- Train vision models using JAX and 🤗 transformers ☆99 · Updated last week
- Code for the paper "Function-Space Learning Rates" ☆23 · Updated 3 months ago
- An attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆89 · Updated 2 months ago
- ☆31 · Updated last year
- ☆34 · Updated last year