nil0x9 / flash-muon
Flash-Muon: An Efficient Implementation of Muon Optimizer
☆225 · Updated 6 months ago
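For orientation before the comparison list: flash-muon targets the Muon optimizer, whose core step is a momentum update followed by an approximate orthogonalization of the update matrix via a Newton-Schulz iteration. The sketch below is a minimal PyTorch illustration of that pattern; the coefficients, step count, and hyperparameters are assumptions taken from public descriptions of Muon, not from this repository's kernels.

```python
import torch

def newton_schulz_orthogonalize(G, steps=5, eps=1e-7):
    # Approximate the nearest (semi-)orthogonal matrix to G using a
    # quintic Newton-Schulz iteration. The coefficients below follow the
    # commonly cited Muon reference values (an assumption, not this repo's code).
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G.float()
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T
    X = X / (X.norm() + eps)  # keep the spectral norm <= 1 before iterating
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * (A @ A)
        X = a * X + B @ X
    if transposed:
        X = X.T
    return X.to(G.dtype)

@torch.no_grad()
def muon_step(param, grad, momentum_buf, lr=0.02, beta=0.95):
    # One Muon-style update for a 2-D weight matrix (hypothetical helper):
    # accumulate momentum, orthogonalize it, then apply the scaled update.
    momentum_buf.mul_(beta).add_(grad)
    update = newton_schulz_orthogonalize(momentum_buf)
    param.add_(update, alpha=-lr)
```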
Alternatives and similar repositories for flash-muon
Users interested in flash-muon are comparing it to the libraries listed below.
- ☆133 · Updated 7 months ago
- 🔥 A minimal training framework for scaling FLA models ☆335 · Updated last month
- ☆265 · Updated 7 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆108 · Updated 3 months ago
- Triton implementation of FlashAttention2 that adds custom masks ☆160 · Updated last year
- ☆103 · Updated 10 months ago
- Fast and memory-efficient exact attention ☆74 · Updated 10 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆128 · Updated 6 months ago
- Normalized Transformer (nGPT) ☆195 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts ☆259 · Updated 3 months ago
- [ICLR 2025] Official PyTorch implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆421 · Updated 3 months ago
- ☆115 · Updated 3 weeks ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆134 · Updated 3 weeks ago
- Efficient Triton implementation of Native Sparse Attention ☆258 · Updated 7 months ago
- Muon with FSDP2 ☆48 · Updated 5 months ago
- Official implementation for training LLMs with MXFP4 ☆116 · Updated 8 months ago
- ☆156 · Updated 11 months ago
- Block Diffusion for Ultra-Fast Speculative Decoding ☆313 · Updated last week
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆104 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 11 months ago
- Low-bit optimizers for PyTorch ☆137 · Updated 2 years ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU Clusters ☆131 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆257 · Updated 5 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton ☆75 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆132 · Updated 2 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated last year
- ☆21 · Updated 2 weeks ago
- The simplest implementation of recent sparse attention patterns for efficient LLM inference ☆91 · Updated 5 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 6 months ago
- ☆216 · Updated last month