apple / ml-ademamix
☆67 · Updated 10 months ago
Alternatives and similar repositories for ml-ademamix
Users interested in ml-ademamix are comparing it to the libraries listed below
- Supporting PyTorch FSDP for optimizers ☆83 · Updated 10 months ago
- ☆216 · Updated 10 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆56 · Updated 7 months ago
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Accelerated First Order Parallel Associative Scan ☆189 · Updated last year
- ☆58 · Updated last year
- ☆91 · Updated last year
- Supporting code for the blog post on modular manifolds. ☆71 · Updated 2 weeks ago
- Focused on fast experimentation and simplicity ☆75 · Updated 9 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated last year
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆61 · Updated 7 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆164 · Updated 3 months ago
- Experiment of using Tangent to autodiff triton ☆80 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆192 · Updated 10 months ago
- ☆120 · Updated 4 months ago
- ☆34 · Updated last year
- 📄 Small Batch Size Training for Language Models ☆63 · Updated last week
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆102 · Updated 9 months ago
- WIP ☆93 · Updated last year
- Simple implementation of muP, based on the Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam. ☆85 · Updated last year
- ☆53 · Updated last year
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. ☆71 · Updated this week
- An implementation of PSGD Kron second-order optimizer for PyTorch ☆95 · Updated 2 months ago
- Efficient optimizers ☆265 · Updated last week
- ☆102 · Updated 2 months ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆90 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 5 months ago
- ☆40 · Updated last month
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆98 · Updated 3 months ago
- Maximal Update Parametrization (μP) with Flax & Optax. ☆16 · Updated last year