nil0x9 / flash-muon
Flash-Muon: An Efficient Implementation of the Muon Optimizer
☆91 · Updated this week
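Muon updates 2D weight matrices by orthogonalizing the momentum-accumulated gradient with a Newton-Schulz iteration before applying it. Below is a minimal plain-PyTorch sketch of that update rule for illustration only; the quintic coefficients and shape-based scaling follow the commonly used reference Muon implementation, and the function names (`newton_schulz`, `muon_step`) are placeholders, not flash-muon's actual API or its fused kernels.

```python
# Illustrative sketch of a Muon-style update (momentum + Newton-Schulz
# orthogonalization). NOT flash-muon's API; coefficients are assumed to
# match the commonly used reference implementation.
import torch

def newton_schulz(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately orthogonalize a 2D matrix via Newton-Schulz iteration."""
    a, b, c = 3.4445, -4.7750, 2.0315   # assumed quintic coefficients
    X = G / (G.norm() + 1e-7)            # normalize so the iteration converges
    if G.size(0) > G.size(1):
        X = X.T                           # work with the wide orientation
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    if G.size(0) > G.size(1):
        X = X.T
    return X

@torch.no_grad()
def muon_step(param, grad, momentum_buf, lr=0.02, momentum=0.95):
    """One Muon-style step for a single 2D weight matrix."""
    momentum_buf.mul_(momentum).add_(grad)        # momentum accumulation
    update = newton_schulz(momentum_buf)          # orthogonalize the update
    # shape-dependent scaling, as used in common Muon implementations
    update *= max(1.0, param.size(0) / param.size(1)) ** 0.5
    param.add_(update, alpha=-lr)

# Example usage on a random weight matrix:
W = torch.randn(512, 256)
g = torch.randn_like(W)
buf = torch.zeros_like(W)
muon_step(W, g, buf)
```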
Alternatives and similar repositories for flash-muon:
Users interested in flash-muon are comparing it to the libraries listed below.
- 🔥 A minimal training framework for scaling FLA models (☆117, updated this week)
- Accelerate LLM preference tuning via prefix sharing with a single line of code (☆40, updated this week)
- Odysseus: Playground of LLM Sequence Parallelism (☆69, updated 10 months ago)
- ☆126, updated 2 months ago
- Fast and memory-efficient exact attention (☆68, updated 2 months ago)
- Efficient Triton implementation of Native Sparse Attention (☆142, updated 3 weeks ago)
- ☆20, updated last month
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference (☆100, updated 2 weeks ago)
- The simplest implementation of recent sparse attention patterns for efficient LLM inference (☆60, updated 3 months ago)
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance (☆120, updated this week)
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … (☆59, updated 6 months ago)
- PyTorch bindings for CUTLASS grouped GEMM (☆87, updated this week)
- Transformers components but in Triton (☆32, updated last month)
- Boosting 4-bit inference kernels with 2:4 sparsity (☆73, updated 8 months ago)
- Repository for sparse finetuning of LLMs via a modified version of the MosaicML llmfoundry (☆40, updated last year)
- Using FlexAttention to compute attention with different masking patterns (☆43, updated 7 months ago)
- ☆68, updated last week
- Here we will test various linear attention designs (☆60, updated last year)
- Code for data-aware compression of DeepSeek models (☆21, updated 3 weeks ago)
- DPO, but faster 🚀 (☆41, updated 5 months ago)
- ☆54, updated last month
- Triton-based implementation of Sparse Mixture of Experts (☆212, updated 5 months ago)
- Linear Attention Sequence Parallelism (LASP) (☆82, updated 11 months ago)
- ☆17, updated last week
- ☆44, updated 2 months ago
- ☆69, updated 2 months ago
- The official implementation of the paper SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction (☆45, updated 6 months ago)
- Work in progress (☆58, updated 3 weeks ago)
- Quantized Attention on GPU (☆45, updated 5 months ago)
- Experiment of using Tangent to autodiff Triton (☆78, updated last year)