svg-project / flash-kmeans
Fast and memory-efficient exact kmeans
☆138 · Updated this week
Alternatives and similar repositories for flash-kmeans
Users interested in flash-kmeans are comparing it to the libraries listed below.
- An experimental communicating attention kernel based on DeepEP. ☆35 · Jul 29, 2025 · Updated 6 months ago
- ☆36 · Sep 6, 2025 · Updated 5 months ago
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Sep 4, 2025 · Updated 5 months ago
- Fast and memory-efficient exact attention ☆18 · Jan 23, 2026 · Updated 3 weeks ago
- ☆52 · May 19, 2025 · Updated 8 months ago
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆627 · Feb 3, 2026 · Updated last week
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆290 · Nov 7, 2025 · Updated 3 months ago
- ☆65 · Apr 26, 2025 · Updated 9 months ago
- Debug print operator for cudagraph debugging ☆14 · Aug 2, 2024 · Updated last year
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- DeeperGEMM: crazy optimized version ☆74 · May 5, 2025 · Updated 9 months ago
- ☆221 · Nov 19, 2025 · Updated 2 months ago
- ☆118 · May 19, 2025 · Updated 8 months ago
- Distributed Compiler based on Triton for Parallel Systems ☆1,350 · Updated this week
- [WIP] Better (FP8) attention for Hopper ☆32 · Feb 24, 2025 · Updated 11 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆235 · Jun 15, 2025 · Updated 8 months ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆40 · Dec 2, 2023 · Updated 2 years ago
- Transformers components but in Triton ☆34 · May 9, 2025 · Updated 9 months ago
- ☆86 · Updated this week
- ☆38 · Aug 7, 2025 · Updated 6 months ago
- Patches for huggingface transformers to save memory ☆34 · Jun 2, 2025 · Updated 8 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆129 · Jun 24, 2025 · Updated 7 months ago
- ☆130 · Aug 18, 2025 · Updated 5 months ago
- Accelerating MoE with IO and Tile-aware Optimizations ☆583 · Feb 6, 2026 · Updated last week
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆372 · Jul 10, 2025 · Updated 7 months ago
- FlashTile is a CUDA Tile IR compiler compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs. ☆37 · Feb 6, 2026 · Updated last week
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆639 · Updated this week
- flex-block-attn: an efficient block sparse attention computation library ☆108 · Dec 26, 2025 · Updated last month
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Jul 4, 2025 · Updated 7 months ago
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… ☆23 · Oct 1, 2025 · Updated 4 months ago
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆87 · Nov 29, 2025 · Updated 2 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆92 · Jul 17, 2025 · Updated 6 months ago
- [ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆939 · Dec 31, 2025 · Updated last month
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆260 · Nov 18, 2024 · Updated last year
- A Docker image for One Student One Chip's debug exam ☆10 · Sep 22, 2023 · Updated 2 years ago
- ☆13 · Dec 9, 2024 · Updated last year
- ☆13 · Sep 2, 2023 · Updated 2 years ago