apple / ml-ademamix
☆65 · Updated 8 months ago
Alternatives and similar repositories for ml-ademamix
Users interested in ml-ademamix are comparing it to the libraries listed below.
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 8 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆60 · Updated 5 months ago
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training ☆130 · Updated last year
- ☆206 · Updated 8 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated 11 months ago
- ☆83 · Updated last year
- ☆53 · Updated 10 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆149 · Updated last month
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆55 · Updated 5 months ago
- Accelerated First Order Parallel Associative Scan ☆184 · Updated 11 months ago
- DeMo: Decoupled Momentum Optimization ☆190 · Updated 8 months ago
- WIP ☆94 · Updated 11 months ago
- ☆81 · Updated last year
- Focused on fast experimentation and simplicity ☆76 · Updated 7 months ago
- 📄 Small Batch Size Training for Language Models ☆41 · Updated this week
- Simple implementation of muP, based on Spectral Condition for Feature Learning. The implementation is SGD only; don't use it for Adam ☆84 · Updated last year
- Efficient optimizers ☆253 · Updated last week
- ☆34 · Updated 11 months ago
- An implementation of PSGD Kron second-order optimizer for PyTorch ☆94 · Updated 2 weeks ago
- A library for unit scaling in PyTorch ☆128 · Updated last month
- ☆115 · Updated 2 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆19 · Updated 2 weeks ago
- 🧱 Modula software package ☆216 · Updated 2 weeks ago
- ☆53 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers ☆103 · Updated last week
- σ-GPT: A New Approach to Autoregressive Models ☆67 · Updated 11 months ago
- ☆33 · Updated last month
- LoRA for arbitrary JAX models and functions ☆140 · Updated last year
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 7 months ago