KellerJordan / Muon
Muon is an optimizer for hidden layers in neural networks
☆2,116 · Updated last month
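Muon replaces the elementwise update of Adam-style optimizers with an approximately orthogonalized momentum step for 2D weight matrices, using a quintic Newton-Schulz iteration. The sketch below illustrates that core update; the iteration coefficients follow the repo's published write-up, but the helper names (`newton_schulz5`, `muon_step`) and defaults are illustrative, not the package's actual API.

```python
import torch

def newton_schulz5(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately orthogonalize G (push it toward U V^T from its SVD)
    via the quintic Newton-Schulz iteration described in the Muon write-up."""
    assert G.ndim == 2
    a, b, c = 3.4445, -4.7750, 2.0315  # coefficients from the write-up
    X = G.bfloat16()
    transposed = G.size(0) > G.size(1)
    if transposed:
        X = X.T  # iterate on the "wide" orientation
    X = X / (X.norm() + 1e-7)  # normalize so the top singular value is <= 1
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * A @ A
        X = a * X + B @ X
    if transposed:
        X = X.T
    return X.to(G.dtype)

@torch.no_grad()
def muon_step(param, grad, momentum_buf, lr=0.02, momentum=0.95):
    """One Muon-style update for a single 2D weight matrix (a sketch,
    not the repo's public interface)."""
    momentum_buf.mul_(momentum).add_(grad)               # momentum accumulation
    nesterov_grad = grad.add(momentum_buf, alpha=momentum)
    update = newton_schulz5(nesterov_grad)
    # scale so the update magnitude is roughly independent of matrix shape
    param.add_(update, alpha=-lr * max(1, param.size(0) / param.size(1)) ** 0.5)
```

In a training loop this update would be applied only to hidden-layer weight matrices, with embeddings, output heads, and 1-D parameters routed to a standard optimizer such as AdamW, which is the split the repo recommends.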
Alternatives and similar repositories for Muon
Users interested in Muon are comparing it to the libraries listed below.
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ☆790 · Updated 4 months ago
- Helpful tools and examples for working with flex-attention ☆1,089 · Updated last week
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,089 · Updated this week
- [ICLR 2025 Spotlight🔥] Official implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆579 · Updated 10 months ago
- Muon is Scalable for LLM Training ☆1,387 · Updated 4 months ago
- H-Net: Hierarchical Network with Dynamic Chunking ☆797 · Updated last month
- Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch ☆1,748 · Updated this week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆944 · Updated 9 months ago
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,294 · Updated last year
- Code for the BLT research paper ☆2,018 · Updated last month
- Official PyTorch implementation for "Large Language Diffusion Models" ☆3,424 · Updated last month
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆1,923 · Updated 4 months ago
- Schedule-Free Optimization in PyTorch ☆2,241 · Updated 7 months ago
- Implementing DeepSeek R1's GRPO algorithm from scratch ☆1,718 · Updated 8 months ago
- [ICLR 2025 Oral] Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ☆920 · Updated 5 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆856 · Updated 2 months ago
- Dream 7B, a large diffusion language model ☆1,115 · Updated last month
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆933 · Updated last month
- Training Large Language Models to Reason in a Continuous Latent Space ☆1,411 · Updated 4 months ago
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆435 · Updated last week
- Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆436 · Updated last month
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch ☆783 · Updated 4 months ago
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆1,285 · Updated 3 weeks ago
- ☆565 · Updated 3 months ago
- ☆647 · Updated 8 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆445 · Updated 7 months ago
- When it comes to optimizers, it's always better to be safe than sorry ☆397 · Updated 3 months ago
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,177 · Updated 3 months ago
- Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation (NeurIPS 2025) ☆526 · Updated 3 months ago
- A PyTorch library for implementing flow matching algorithms, featuring continuous and discrete flow matching implementations. It includes… ☆3,896 · Updated 3 months ago