Fast and memory-efficient exact attention
☆118 · Apr 3, 2026 · Updated this week
Alternatives and similar repositories for flash-attention
Users that are interested in flash-attention are comparing it to the libraries listed below.
- FlashInfer: Kernel Library for LLM Serving ☆5,273 · Updated this week
- FlashTile is a CUDA Tile IR compiler that is compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs. ☆59 · Feb 6, 2026 · Updated 2 months ago
- CUDA Templates for Linear Algebra Subroutines ☆102 · Apr 25, 2024 · Updated last year
- ☆18 · Mar 4, 2025 · Updated last year
- KV cache store for distributed LLM inference ☆402 · Nov 13, 2025 · Updated 4 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance.⚡️ ☆150 · May 10, 2025 · Updated 10 months ago
- Performance engineering ☆30 · Jul 11, 2024 · Updated last year
- ☆11 · Aug 23, 2023 · Updated 2 years ago
- A simple implementation of Llama 1 and 2. The Llama architecture is built from scratch using PyTorch; all the models are built from scratch that inc… ☆14 · May 6, 2024 · Updated last year
- High-performance LLM operator library built on TileLang. ☆98 · Updated this week
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆320 · Jun 10, 2025 · Updated 9 months ago
- Triton adapter for Ascend. Mirror of https://gitcode.com/ascend/triton-ascend ☆117 · Updated this week
- A Triton-only attention backend for vLLM ☆25 · Mar 17, 2026 · Updated 3 weeks ago
- ☆18 · Jan 4, 2024 · Updated 2 years ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) ☆30 · Jan 22, 2026 · Updated 2 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,080 · Updated this week
- ☆155 · Mar 4, 2025 · Updated last year
- vLLM plugin for RBLN NPU ☆45 · Updated this week
- Benchmark SGLang on SLURM ☆24 · Apr 2, 2026 · Updated last week
- ☆33 · Feb 3, 2025 · Updated last year
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆5,039 · Updated this week
- A sparse attention kernel supporting mixed sparse patterns ☆495 · Jan 18, 2026 · Updated 2 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆380 · Jul 10, 2025 · Updated 8 months ago
- Code for the paper "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" [ICLR 2025 Oral] ☆167 · Oct 13, 2025 · Updated 5 months ago
- A semantics-based keyword extraction algorithm for Chinese text ☆20 · Mar 24, 2021 · Updated 5 years ago
- Triton for DSA ☆60 · Apr 2, 2026 · Updated last week
- Study of CUTLASS ☆22 · Nov 10, 2024 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs). ☆798 · Apr 6, 2025 · Updated last year
- Demo for Qwen2.5-VL-3B-Instruct on Axera devices. ☆15 · Sep 3, 2025 · Updated 7 months ago
- Efficient and easy multi-instance LLM serving ☆541 · Mar 12, 2026 · Updated 3 weeks ago
- Patches for Hugging Face Transformers to save memory ☆36 · Jun 2, 2025 · Updated 10 months ago
- ☆27 · May 31, 2025 · Updated 10 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Mar 25, 2026 · Updated 2 weeks ago
- AST interpreter built with Clang 5.0.0 and LLVM 5.0.0 ☆14 · Dec 7, 2019 · Updated 6 years ago
- Official repository of "Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions", ICLR 2024 Sp… ☆21 · Mar 7, 2024 · Updated 2 years ago
- A curated list of work on efficient large language models ☆11 · Mar 25, 2024 · Updated 2 years ago
- ☆22 · May 5, 2025 · Updated 11 months ago
- 🤖FFPA: extends FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x–3x↑🎉 vs SDPA EA. ☆257 · Feb 13, 2026 · Updated last month