Fast and memory-efficient exact attention
☆122 · Apr 24, 2026 · Updated 2 weeks ago
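As a point of reference for what "memory-efficient exact attention" means here: FlashAttention computes exact attention without ever materializing the full score matrix, by processing K/V in tiles with an online softmax. Below is a minimal NumPy sketch of that idea (illustrative only; function names are ours, and this is not the actual CUDA kernel):

```python
import numpy as np

def attention_reference(q, k, v):
    # Standard attention: materializes the full (n, n) score matrix.
    s = q @ k.T / np.sqrt(q.shape[-1])
    p = np.exp(s - s.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return p @ v

def attention_tiled(q, k, v, block=4):
    # FlashAttention-style pass: visit K/V in blocks, keeping only a
    # running row max (m), a running softmax denominator (l), and the
    # output accumulator (o), so no (n, n) matrix is ever stored.
    n, d = q.shape
    scale = 1.0 / np.sqrt(d)
    o = np.zeros((n, d))
    m = np.full((n, 1), -np.inf)   # running row max
    l = np.zeros((n, 1))           # running softmax denominator
    for j in range(0, k.shape[0], block):
        kj, vj = k[j:j + block], v[j:j + block]
        s = q @ kj.T * scale                        # (n, block) score tile
        m_new = np.maximum(m, s.max(axis=-1, keepdims=True))
        p = np.exp(s - m_new)
        correction = np.exp(m - m_new)              # rescale old statistics
        l = l * correction + p.sum(axis=-1, keepdims=True)
        o = o * correction + p @ vj
        m = m_new
    return o / l

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 16)) for _ in range(3))
assert np.allclose(attention_tiled(q, k, v), attention_reference(q, k, v))
```

The tiled pass returns the same result as the reference up to floating-point error; the savings come from replacing the O(n²) score matrix with O(n) running statistics.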
Alternatives and similar repositories for flash-attention
Users interested in flash-attention are comparing it to the libraries listed below.
- FlashInfer: Kernel Library for LLM Serving ☆5,544 · Updated this week
- FlashTile is a CUDA Tile IR compiler that is compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs. ☆60 · Feb 6, 2026 · Updated 3 months ago
- CUDA Templates for Linear Algebra Subroutines ☆101 · Apr 25, 2024 · Updated 2 years ago
- ☆19 · Mar 4, 2025 · Updated last year
- KV cache store for distributed LLM inference ☆416 · Nov 13, 2025 · Updated 5 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆151 · May 10, 2025 · Updated 11 months ago
- Performance engineering ☆30 · Jul 11, 2024 · Updated last year
- ☆11 · Aug 23, 2023 · Updated 2 years ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of …) ☆328 · Jun 10, 2025 · Updated 10 months ago
- A Triton-only attention backend for vLLM ☆25 · Mar 17, 2026 · Updated last month
- Triton adapter for Ascend. Mirror of https://gitcode.com/ascend/triton-ascend ☆119 · Apr 30, 2026 · Updated last week
- ☆18 · Jan 4, 2024 · Updated 2 years ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- A high-performance RMSNorm implementation using SM core storage (registers and shared memory) ☆30 · Jan 22, 2026 · Updated 3 months ago
- High-performance LLM operator library built on TileLang. ☆118 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,108 · Updated this week
- ☆157 · Mar 4, 2025 · Updated last year
- vLLM plugin for RBLN NPU ☆49 · May 1, 2026 · Updated last week
- Benchmark SGLang on SLURM ☆24 · Apr 20, 2026 · Updated 2 weeks ago
- ☆33 · Feb 3, 2025 · Updated last year
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆5,242 · Apr 30, 2026 · Updated last week
- A sparse attention kernel supporting mixed sparse patterns ☆509 · Jan 18, 2026 · Updated 3 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆382 · Jul 10, 2025 · Updated 9 months ago
- Code for the paper "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" [ICLR 2025 Oral] ☆168 · Oct 13, 2025 · Updated 6 months ago
- A semantics-based keyword extraction algorithm for Chinese text ☆20 · Mar 24, 2021 · Updated 5 years ago
- Third-party dependencies for the DALI project ☆11 · Apr 28, 2026 · Updated last week
- Triton for DSA ☆63 · Apr 14, 2026 · Updated 3 weeks ago
- A study of CUTLASS ☆22 · Nov 10, 2024 · Updated last year
- Demo for Qwen2.5-VL-3B-Instruct on Axera devices. ☆15 · Sep 3, 2025 · Updated 8 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆807 · Apr 6, 2025 · Updated last year
- Efficient and easy multi-instance LLM serving ☆547 · Mar 12, 2026 · Updated last month
- ☆29 · May 31, 2025 · Updated 11 months ago
- A collection of models for TensorFlow Go ☆12 · May 29, 2022 · Updated 3 years ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Updated this week
- An AST interpreter built with Clang 5.0.0 and LLVM 5.0.0 ☆14 · Dec 7, 2019 · Updated 6 years ago
- Official repository of "Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions", ICLR 2024 Sp… ☆21 · Mar 7, 2024 · Updated 2 years ago
- Patches for Hugging Face Transformers to save memory ☆37 · Jun 2, 2025 · Updated 11 months ago
- A curated list of Efficient Large Language Models ☆11 · Mar 25, 2024 · Updated 2 years ago
- ☆22 · May 5, 2025 · Updated last year