KellerJordan / Muon
Muon: An optimizer for hidden layers in neural networks
☆897 · Updated last week
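Muon's core update is a small modification of SGD with momentum: before the momentum buffer of a 2-D hidden-layer weight is applied, it is approximately orthogonalized with a quintic Newton-Schulz iteration. Below is a minimal sketch assuming the Newton-Schulz coefficients from the public reference repo; `muon_step`, its plain (non-Nesterov) momentum, and the float32 iteration are illustrative simplifications (the reference uses Nesterov momentum and runs the iteration in bfloat16), not the repo's exact API.

```python
import torch

def zeropower_via_newtonschulz(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately replace G with its nearest semi-orthogonal matrix.

    Quintic Newton-Schulz iteration; coefficients follow the Muon
    reference implementation (which runs this step in bfloat16).
    """
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + 1e-7)           # scale so the spectral norm is <= 1
    transposed = X.size(0) > X.size(1)
    if transposed:                       # iterate on the wide orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

def muon_step(param: torch.Tensor, grad: torch.Tensor, buf: torch.Tensor,
              lr: float = 0.02, momentum: float = 0.95) -> None:
    """One Muon update for a single 2-D weight (hypothetical helper)."""
    buf.mul_(momentum).add_(grad)                  # momentum accumulation
    update = zeropower_via_newtonschulz(buf)       # orthogonalize the step
    update *= max(1.0, param.size(0) / param.size(1)) ** 0.5  # shape scaling
    param.add_(update, alpha=-lr)

# Toy usage: one update on a random hidden-layer weight.
W = torch.randn(256, 128)
g = torch.randn_like(W)
m = torch.zeros_like(W)
muon_step(W, g, m)
```

Per the repo's usage notes, this update is meant only for the 2-D hidden-layer weights; embeddings, output heads, and scalar parameters are optimized with AdamW instead.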
Alternatives and similar repositories for Muon
Users interested in Muon are comparing it to the libraries listed below.
- Helpful tools and examples for working with flex-attention ☆831 · Updated last week
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ☆653 · Updated last week
- [ICLR 2025 Spotlight 🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆562 · Updated 4 months ago
- Muon is Scalable for LLM Training ☆1,077 · Updated 2 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆421 · Updated last month
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ☆698 · Updated 2 months ago
- Code for the BLT (Byte Latent Transformer) research paper ☆1,686 · Updated last month
- 🚀 Efficient implementations of state-of-the-art linear attention models in Torch and Triton ☆2,753 · Updated this week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆519 · Updated last month
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆700 · Updated 3 months ago
- When it comes to optimizers, it's always better to be safe than sorry ☆241 · Updated 2 months ago
- Understanding R1-Zero-Like Training: A Critical Perspective ☆988 · Updated 3 weeks ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆881 · Updated last month
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆1,548 · Updated 2 weeks ago
- Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆411 · Updated 10 months ago
- Dream 7B, a large diffusion language model ☆764 · Updated last week
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆284 · Updated 2 weeks ago
- [NeurIPS 2024] Simple and Effective Masked Diffusion Language Model ☆427 · Updated 2 weeks ago
- The official implementation of Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆375 · Updated this week
- Annotated version of the Mamba paper ☆485 · Updated last year
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ☆799 · Updated 8 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton ☆556 · Updated this week
- A bibliography and survey of the papers surrounding o1 ☆1,199 · Updated 7 months ago
- Pretraining code for a large-scale depth-recurrent language model ☆782 · Updated last week
- TransMLA: Multi-Head Latent Attention Is All You Need ☆302 · Updated this week
- Ring attention implementation with flash attention ☆789 · Updated last week
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,155 · Updated 4 months ago