mit-han-lab / flash-moba
☆221 · Nov 19, 2025 · Updated 2 months ago
Alternatives and similar repositories for flash-moba
Users interested in flash-moba are comparing it to the libraries listed below.
- NVIDIA cuTile learn ☆162 · Dec 9, 2025 · Updated 2 months ago
- ☆27 · Dec 31, 2025 · Updated last month
- ☆35 · Mar 7, 2025 · Updated 11 months ago
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Sep 4, 2025 · Updated 5 months ago
- ☆131 · May 29, 2025 · Updated 8 months ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- [ASPLOS'26] Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter ☆135 · Dec 5, 2025 · Updated 2 months ago
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆191 · Feb 7, 2026 · Updated last week
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆269 · Jul 6, 2025 · Updated 7 months ago
- ☆22 · May 5, 2025 · Updated 9 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆964 · Feb 5, 2026 · Updated last week
- ☆270 · Jun 6, 2025 · Updated 8 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆235 · Jun 15, 2025 · Updated 7 months ago
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆87 · Nov 29, 2025 · Updated 2 months ago
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆89 · Jan 29, 2026 · Updated 2 weeks ago
- ☆65 · Apr 26, 2025 · Updated 9 months ago
- A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention ☆281 · Dec 1, 2025 · Updated 2 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆455 · Jan 18, 2026 · Updated 3 weeks ago
- DeeperGEMM: crazy optimized version ☆73 · May 5, 2025 · Updated 9 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆258 · Aug 9, 2025 · Updated 6 months ago
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆66 · Feb 12, 2025 · Updated last year
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆66 · Dec 11, 2025 · Updated 2 months ago
- ☆52 · May 19, 2025 · Updated 8 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,379 · Updated this week
- Fast and memory-efficient exact kmeans ☆138 · Updated this week
- SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse–Linear Attention ☆264 · Jan 17, 2026 · Updated 3 weeks ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Feb 10, 2025 · Updated last year
- Distributed Compiler based on Triton for Parallel Systems ☆1,350 · Updated this week
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Jul 4, 2025 · Updated 7 months ago
- ☆13 · Oct 3, 2024 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆129 · Jun 24, 2025 · Updated 7 months ago
- Implementation of FP8/INT8 rollout for RL training without performance drop ☆290 · Nov 7, 2025 · Updated 3 months ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆283 · May 1, 2025 · Updated 9 months ago
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference ☆927 · Dec 31, 2025 · Updated last month
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆372 · Jul 10, 2025 · Updated 7 months ago
- Vortex: A Flexible and Efficient Sparse Attention Framework ☆46 · Jan 21, 2026 · Updated 3 weeks ago
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆109 · Oct 11, 2025 · Updated 4 months ago
- ☆104 · Nov 7, 2024 · Updated last year
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆639 · Updated this week