nil0x9 / flash-muon
Flash-Muon: An Efficient Implementation of Muon Optimizer
☆233 · Jun 15, 2025 · Updated 7 months ago
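For context on what flash-muon accelerates: Muon replaces the element-wise Adam-style update with an approximately orthogonalized momentum matrix, typically computed with a few Newton-Schulz iterations. The sketch below illustrates that core step only; the coefficients and step count follow the commonly circulated reference implementation and are assumptions here, not flash-muon's fused CUDA kernels.

```python
import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately orthogonalize a 2D gradient/momentum matrix.

    Minimal sketch of the quintic Newton-Schulz iteration used by Muon;
    the coefficients below follow the widely cited reference implementation
    (an assumption here, not flash-muon's optimized kernel).
    """
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G.to(torch.bfloat16)
    X = X / (X.norm() + 1e-7)      # keep the spectral norm roughly <= 1
    transposed = G.size(0) > G.size(1)
    if transposed:                 # iterate on the "wide" orientation
        X = X.mT
    for _ in range(steps):
        A = X @ X.mT
        B = b * A + c * (A @ A)
        X = a * X + B @ X
    if transposed:
        X = X.mT
    return X.to(G.dtype)

# Usage: orthogonalize a momentum buffer before applying it as the weight update.
if __name__ == "__main__":
    M = torch.randn(1024, 4096)
    U = newton_schulz_orthogonalize(M)
    print(U.shape)                 # torch.Size([1024, 4096])
```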
Alternatives and similar repositories for flash-muon
Users interested in flash-muon are comparing it to the libraries listed below.
- ☆52 · May 19, 2025 · Updated 8 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Jul 4, 2025 · Updated 7 months ago
- ☆67 · Mar 21, 2025 · Updated 10 months ago
- ☆65 · Apr 26, 2025 · Updated 9 months ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Sep 4, 2025 · Updated 5 months ago
- 🔥 A minimal training framework for scaling FLA models ☆344 · Nov 15, 2025 · Updated 2 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆96 · Sep 19, 2025 · Updated 4 months ago
- Transformers components but in Triton ☆34 · May 9, 2025 · Updated 9 months ago
- ☆44 · Nov 1, 2025 · Updated 3 months ago
- DeeperGEMM: crazy optimized version ☆73 · May 5, 2025 · Updated 9 months ago
- Muon is Scalable for LLM Training ☆1,426 · Aug 3, 2025 · Updated 6 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆326 · Updated this week
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆137 · Dec 19, 2025 · Updated last month
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆148 · May 10, 2025 · Updated 9 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆129 · Jun 24, 2025 · Updated 7 months ago
- [WIP] Better (FP8) attention for Hopper ☆32 · Feb 24, 2025 · Updated 11 months ago
- ☆221 · Nov 19, 2025 · Updated 2 months ago
- ☆66 · Jul 8, 2025 · Updated 7 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆258 · Aug 9, 2025 · Updated 6 months ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆744 · Updated this week
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆631 · Feb 6, 2026 · Updated last week
- Implement Flash Attention using Cute. ☆100 · Dec 17, 2024 · Updated last year
- Estimate MFU for DeepSeekV3 ☆26 · Jan 5, 2025 · Updated last year
- Helpful tools and examples for working with flex-attention ☆1,127 · Updated this week
- Combining SOAP and MUON ☆19 · Feb 11, 2025 · Updated last year
- Open-sourcing code associated with the AAAI-25 paper "On the Expressiveness and Length Generalization of Selective State-Space Models on … ☆14 · Sep 18, 2025 · Updated 4 months ago
- Distributed Compiler based on Triton for Parallel Systems ☆1,350 · Updated this week
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆269 · Feb 2, 2026 · Updated last week
- Using FlexAttention to compute attention with different masking patterns ☆47 · Sep 22, 2024 · Updated last year
- Implementations and experimentation on mHC by DeepSeek - https://arxiv.org/abs/2512.24880 ☆302 · Updated this week
- Flash-Linear-Attention models beyond language ☆21 · Aug 28, 2025 · Updated 5 months ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Oct 5, 2024 · Updated last year
- Muon is an optimizer for hidden layers in neural networks ☆2,290 · Jan 19, 2026 · Updated 3 weeks ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆192 · Jan 28, 2025 · Updated last year
- Implementation from scratch in C of the Multi-head Latent Attention used in the DeepSeek-V3 technical paper. ☆19 · Jan 15, 2025 · Updated last year
- ☆124 · May 28, 2024 · Updated last year