mdy666 / Scalable-Flash-Native-Sparse-Attention
☆47 · Updated last month
Alternatives and similar repositories for Scalable-Flash-Native-Sparse-Attention
Users interested in Scalable-Flash-Native-Sparse-Attention are comparing it to the libraries listed below.
- flex-block-attn: an efficient block-sparse attention computation library ☆107 · Updated last month
- Fast and memory-efficient exact k-means ☆136 · Updated 2 months ago
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆229 · Updated 7 months ago
- Triton implementation of bi-directional (non-causal) linear attention ☆63 · Updated 11 months ago
- Official repository for the paper "Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regression" ☆23 · Updated 3 months ago
- ☆220 · Updated 2 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated last year
- ☆128 · Updated 5 months ago
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆161 · Updated 3 months ago
- ☆133 · Updated 8 months ago
- Vortex: A Flexible and Efficient Sparse Attention Framework ☆45 · Updated last week
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity ☆69 · Updated 6 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆266 · Updated 6 months ago
- Tiny-FSDP, a minimalistic re-implementation of PyTorch FSDP ☆93 · Updated 5 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activations for Memory-Efficient FP8 Training ☆258 · Updated 5 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆128 · Updated 7 months ago
- Efficient Triton implementation of Native Sparse Attention ☆261 · Updated 8 months ago
- ☆104 · Updated 11 months ago
- 🔥 A minimal training framework for scaling FLA models ☆341 · Updated 2 months ago
- ☆22 · Updated 3 weeks ago
- Efficient 2:4 sparse training algorithms and implementations ☆58 · Updated last year
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and caching ☆53 · Updated 3 months ago
- ☆158 · Updated 11 months ago
- Quantized Attention on GPU ☆44 · Updated last year
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆188 · Updated this week
- Triton implementation of FlashAttention-2 with support for custom masks ☆163 · Updated last year
- 16-fold memory-access reduction with nearly no loss ☆109 · Updated 10 months ago
- ☆63 · Updated 6 months ago
- HALO: Hadamard-Assisted Low-Precision Optimization, a training method for fine-tuning LLMs. 🚀 The official implementation of https://arx… ☆29 · Updated 11 months ago
- ☆83 · Updated last week
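
Most of the sparse-attention repositories above share one core idea: each query block attends only to a small, selected set of key/value blocks instead of the full sequence. The minimal PyTorch sketch below illustrates that idea only; it is not the API of Scalable-Flash-Native-Sparse-Attention or of any library listed here, and the `block_sparse_attention` helper, the mean-pooled block scoring, and the `block_size`/`topk` parameters are illustrative assumptions.

```python
# Minimal block-sparse attention sketch in plain PyTorch (illustrative only;
# not any listed library's API). Each query block keeps only its top-k scoring
# key/value blocks; everything else is masked out.
import torch
import torch.nn.functional as F

def block_sparse_attention(q, k, v, block_size=64, topk=4):
    # q, k, v: [batch, heads, seq_len, head_dim]; seq_len divisible by block_size.
    B, H, S, D = q.shape
    nb = S // block_size
    # Summarize each block by its mean to score query-block/key-block pairs.
    q_blk = q.view(B, H, nb, block_size, D).mean(dim=3)   # [B, H, nb, D]
    k_blk = k.view(B, H, nb, block_size, D).mean(dim=3)   # [B, H, nb, D]
    scores = q_blk @ k_blk.transpose(-1, -2)              # [B, H, nb, nb]
    # Keep only the top-k key blocks per query block (the "sparse" part).
    top = scores.topk(min(topk, nb), dim=-1).indices
    keep = torch.zeros_like(scores, dtype=torch.bool).scatter_(-1, top, True)
    # Expand the block-level mask to a token-level additive bias.
    mask = keep.repeat_interleave(block_size, dim=-2)
    mask = mask.repeat_interleave(block_size, dim=-1)     # [B, H, S, S]
    attn_bias = torch.where(mask, 0.0, float("-inf"))
    # Dense reference computation under the sparse mask; a real kernel
    # would skip the masked blocks entirely instead of materializing this.
    return F.scaled_dot_product_attention(q, k, v, attn_mask=attn_bias)

q = k = v = torch.randn(1, 2, 512, 32)
out = block_sparse_attention(q, k, v)
print(out.shape)  # torch.Size([1, 2, 512, 32])
```

An optimized kernel, like the Triton NSA implementations listed above, would never materialize the dense S×S mask; it would iterate only over the selected blocks. The dense fallback here just keeps the sketch short and easy to verify.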