Flash-Muon: An Efficient Implementation of Muon Optimizer
☆247 · Updated Jun 15, 2025
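For context: Muon's core step is a short quintic Newton-Schulz iteration that approximately orthogonalizes each weight matrix's momentum before the update, and a fast version of that iteration is what "efficient implementation" refers to. Below is a minimal PyTorch sketch of the iteration, with coefficients as in Keller Jordan's reference Muon repository; how flash-muon's custom kernels map onto these matmuls is an assumption here, not taken from its documentation.

```python
import torch

def newton_schulz(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately orthogonalize G via a quintic Newton-Schulz iteration.

    Coefficients follow the reference Muon implementation; the loop is
    dominated by matmuls, which is where custom/fused kernels can help.
    """
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G.bfloat16()
    transposed = G.size(-2) > G.size(-1)
    if transposed:
        X = X.mT  # iterate in the wide orientation so A below stays small
    # Scale so the spectral norm is at most ~1, required for convergence.
    X = X / (X.norm(dim=(-2, -1), keepdim=True) + 1e-7)
    for _ in range(steps):
        A = X @ X.mT             # small Gram matrix
        B = b * A + c * (A @ A)  # quintic polynomial terms
        X = a * X + B @ X
    return X.mT if transposed else X
```

Each step is three matmuls, so for large weight matrices the iteration is essentially matmul-bound, which is why kernel-level optimizations pay off.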
Alternatives and similar repositories for flash-muon
Users interested in flash-muon are comparing it to the libraries listed below.
- ☆52 · Updated May 19, 2025
- Accelerate LLM preference tuning via prefix sharing with a single line of code · ☆51 · Updated Jul 4, 2025
- ☆68 · Updated Mar 21, 2025
- ☆65 · Updated Apr 26, 2025
- Xmixers: A collection of SOTA efficient token/channel mixers · ☆28 · Updated Sep 4, 2025
- ☆45 · Updated Nov 1, 2025
- Benchmark tests supporting the TiledCUDA library · ☆18 · Updated Nov 19, 2024
- Quantized Attention on GPU · ☆44 · Updated Nov 22, 2024
- 🔥 A minimal training framework for scaling FLA models · ☆358 · Updated Nov 15, 2025
- Transformers components but in Triton · ☆34 · Updated May 9, 2025
- Muon is Scalable for LLM Training · ☆1,446 · Updated Aug 3, 2025
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning · ☆144 · Updated Feb 25, 2026
- ☆234 · Updated Nov 19, 2025
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ☆132 · Updated Jun 24, 2025
- ☆68 · Updated Jul 8, 2025
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance · ☆332 · Updated this week
- DeeperGEMM: crazy optimized version · ☆75 · Updated May 5, 2025
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance · ☆150 · Updated May 10, 2025
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training · ☆262 · Updated Aug 9, 2025
- Combining SOAP and MUON · ☆19 · Updated Feb 11, 2025
- Helpful tools and examples for working with flex-attention · ☆1,161 · Updated Feb 8, 2026
- Flash-Linear-Attention models beyond language · ☆21 · Updated Aug 28, 2025
- Framework to reduce autotune overhead to zero for well-known deployments · ☆97 · Updated Sep 19, 2025
- ☆36 · Updated Mar 7, 2025
- ☆124 · Updated May 28, 2024
- Using FlexAttention to compute attention with different masking patterns · ☆47 · Updated Sep 22, 2024
- Muon is an optimizer for hidden layers in neural networks · ☆2,428 · Updated Jan 19, 2026
- Implementation from scratch in C of the multi-head latent attention used in the DeepSeek-V3 technical paper · ☆18 · Updated Jan 15, 2025
- Estimate MFU for DeepSeek-V3 · ☆26 · Updated Jan 5, 2025
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training · ☆723 · Updated this week
- FlexAttention w/ FlashAttention3 Support · ☆27 · Updated Oct 5, 2024
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton (see the sketch after this list) · ☆75 · Updated Aug 2, 2024
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) · ☆24 · Updated Jun 6, 2024
- Implementations and experimentation on mHC by DeepSeek (https://arxiv.org/abs/2512.24880) · ☆333 · Updated Feb 17, 2026
- ByteCheckpoint: A Unified Checkpointing Library for LFMs · ☆273 · Updated Feb 2, 2026
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling · ☆22 · Updated Mar 18, 2026
- 🚀 Efficient implementations of state-of-the-art linear attention models · ☆4,692 · Updated this week
- 🤖 FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA · ☆255 · Updated Feb 13, 2026
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" · ☆27 · Updated Apr 17, 2024
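On the fused linear-layer + cross-entropy entry above: the point of the fusion is to avoid materializing the full (tokens × vocab) logits tensor. The chunked PyTorch sketch below illustrates the same memory saving at a coarser granularity; the function name and `chunk` parameter are illustrative and not that repository's API, the only assumption being that it computes the same loss without holding all logits at once.

```python
import torch
import torch.nn.functional as F

def chunked_linear_cross_entropy(
    hidden: torch.Tensor,   # (N, d) final hidden states
    weight: torch.Tensor,   # (V, d) LM-head / unembedding weight
    targets: torch.Tensor,  # (N,) target token ids
    chunk: int = 1024,
) -> torch.Tensor:
    """Mean cross-entropy over a linear projection without ever holding
    the full (N, V) logits tensor: project and reduce chunk by chunk."""
    total = hidden.new_zeros(())
    for i in range(0, hidden.size(0), chunk):
        # (chunk, V) logits exist only for this iteration, then are freed.
        logits = hidden[i : i + chunk] @ weight.T
        total = total + F.cross_entropy(
            logits, targets[i : i + chunk], reduction="sum"
        )
    return total / hidden.size(0)
```

A Triton fusion pushes the same idea all the way into a single kernel, but the memory argument is identical.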