nikhilvyas / SOAP
☆230 · Updated last year
Alternatives and similar repositories for SOAP
Users interested in SOAP are comparing it to the libraries listed below.
- Efficient optimizers ☆277 · Updated this week
- Supporting PyTorch FSDP for optimizers ☆84 · Updated last year
- Accelerated First Order Parallel Associative Scan ☆193 · Updated last year
- 🧱 Modula software package ☆316 · Updated 4 months ago
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆188 · Updated last week
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆180 · Updated 5 months ago
- ☆62 · Updated last year
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆335 · Updated last month
- The AdEMAMix Optimizer: Better, Faster, Older. ☆186 · Updated last year
- ☆69 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆135 · Updated last year
- ☆286 · Updated last year
- An implementation of the PSGD Kron second-order optimizer for PyTorch ☆97 · Updated 5 months ago
- LoRA for arbitrary JAX models and functions ☆143 · Updated last year
- A State-Space Model with Rational Transfer Function Representation. ☆83 · Updated last year
- A library for unit scaling in PyTorch ☆133 · Updated 5 months ago
- ☆122 · Updated 6 months ago
- ☆50 · Updated last week
- Supporting code for the blog post on modular manifolds. ☆107 · Updated 2 months ago
- WIP ☆93 · Updated last year
- Maximal Update Parametrization (μP) with Flax & Optax. ☆16 · Updated last year
- A simple library for scaling up JAX programs ☆144 · Updated last month
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- ☆91 · Updated last year
- Implementation of the PSGD optimizer in JAX ☆35 · Updated 11 months ago
- Minimal yet performant LLM examples in pure JAX ☆219 · Updated 3 weeks ago
- Normalized Transformer (nGPT) ☆194 · Updated last year
- nanoGPT-like codebase for LLM training ☆113 · Updated last month
- Getting crystal-like representations with harmonic loss ☆193 · Updated 8 months ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆69 · Updated last year