apple/ml-ademamix (☆59, updated 4 months ago)
Alternatives and similar repositories for ml-ademamix:
Users interested in ml-ademamix are comparing it to the libraries listed below.
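For context, ml-ademamix implements the AdEMAMix optimizer. Below is a minimal, illustrative sketch of an AdEMAMix-style update rule (following the paper, arXiv:2409.03137): Adam's fast gradient EMA is mixed with a second, much slower EMA. This is an assumption-based sketch, not the code from apple/ml-ademamix; the names `ademamix_step`, `b3`, and `alpha` are taken from the paper's notation.

```python
# Illustrative AdEMAMix-style update (NOT the apple/ml-ademamix source):
# Adam's fast EMA m1 plus alpha times a slow EMA m2 with decay b3.
import math

def ademamix_step(p, g, state, lr=1e-2, b1=0.9, b2=0.999, b3=0.9999,
                  alpha=5.0, eps=1e-8):
    state["t"] += 1
    t = state["t"]
    state["m1"] = b1 * state["m1"] + (1 - b1) * g    # fast EMA (Adam's m)
    state["m2"] = b3 * state["m2"] + (1 - b3) * g    # slow EMA, long memory
    state["v"] = b2 * state["v"] + (1 - b2) * g * g  # second-moment EMA
    m1_hat = state["m1"] / (1 - b1 ** t)             # bias-corrected fast EMA
    v_hat = state["v"] / (1 - b2 ** t)               # bias-corrected 2nd moment
    # Mix the corrected fast EMA with alpha times the slow EMA, then
    # normalize by the Adam denominator.
    return p - lr * (m1_hat + alpha * state["m2"]) / (math.sqrt(v_hat) + eps)

# usage: minimize f(p) = p^2 / 2, whose gradient is simply p
p = 5.0
state = {"t": 0, "m1": 0.0, "m2": 0.0, "v": 0.0}
for _ in range(500):
    p = ademamix_step(p, p, state)
```

The design point of the slow EMA is that very old gradients can still carry useful signal; a small `alpha`-weighted long-memory term lets the optimizer exploit them without the staleness problems of simply raising Adam's `b1`.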
- Supporting PyTorch FSDP for optimizers (☆80, updated 3 months ago)
- ☆33, updated 6 months ago
- Focused on fast experimentation and simplicity (☆70, updated 3 months ago)
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun (☆44, updated 3 weeks ago)
- ☆19, updated last week
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, adapted for single-machine microbatches, in PyTorch (☆23, updated 2 months ago)
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels (☆50, updated last week)
- Research implementation of Native Sparse Attention (arXiv:2502.11089) (☆53, updated last month)
- ☆76, updated 8 months ago
- JAX implementation of Black Forest Labs' Flux.1 family of models (☆30, updated 5 months ago)
- The 2D discrete wavelet transform for JAX (☆41, updated 2 years ago)
- ☆53, updated last year
- ☆31, updated 10 months ago
- Implementation of a Light Recurrent Unit in PyTorch (☆47, updated 5 months ago)
- Automatically take good care of your preemptible TPUs (☆36, updated last year)
- An implementation of the Llama architecture, to instruct and delight (☆21, updated 2 months ago)
- Simple implementation of muP, based on the Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam (☆73, updated 8 months ago)
- ☆52, updated 5 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers (☆17, updated 2 weeks ago)
- A repo based on Xi-Lin Li's PSGD repo that extends some of the experiments (☆14, updated 5 months ago)
- Diffusion models in PyTorch (☆99, updated last week)
- Collection of autoregressive model implementations (☆83, updated last month)
- Code for the paper "Function-Space Learning Rates" (☆17, updated last month)
- ☆172, updated 4 months ago
- Implementation of the PSGD optimizer in JAX (☆30, updated 3 months ago)
- Fast, modern, memory-efficient, and low-precision PyTorch optimizers (☆88, updated 8 months ago)
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in PyTorch (☆57, updated last month)
- Utilities for PyTorch distributed (☆23, updated last month)
- Triton implementation of the HyperAttention algorithm (☆47, updated last year)
- Explorations into the recently proposed Taylor Series Linear Attention (☆96, updated 7 months ago)