tspeterkim / paged-attention-minimal
a minimal cache manager for PagedAttention, on top of llama3.
☆135 · Updated Aug 26, 2024
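To make the project's premise concrete, the sketch below shows the kind of block-table bookkeeping a PagedAttention-style KV-cache manager performs: the cache is carved into fixed-size blocks, each sequence maps its logical blocks to physical block IDs, and blocks are returned to a free pool when a sequence finishes. All names and sizes here are illustrative assumptions, not the actual API of paged-attention-minimal.

```python
# Illustrative sketch of PagedAttention-style block bookkeeping (hypothetical
# names; not the repo's actual code). The KV cache is split into fixed-size
# blocks, and each sequence keeps a block table mapping its logical blocks to
# physical block IDs, so memory is allocated on demand rather than up front.

BLOCK_SIZE = 16  # tokens per KV-cache block (assumed value)

class BlockManager:
    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))   # pool of physical block IDs
        self.block_tables = {}                       # seq_id -> [physical block IDs]
        self.seq_lens = {}                           # seq_id -> tokens written so far

    def append_token(self, seq_id: int) -> tuple[int, int]:
        """Reserve a slot for one new token; returns (physical_block, offset)."""
        table = self.block_tables.setdefault(seq_id, [])
        length = self.seq_lens.get(seq_id, 0)
        if length % BLOCK_SIZE == 0:                 # current block full (or first token)
            if not self.free_blocks:
                raise RuntimeError("out of KV-cache blocks; would need preemption")
            table.append(self.free_blocks.pop())
        self.seq_lens[seq_id] = length + 1
        return table[-1], length % BLOCK_SIZE

    def free(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the free pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.seq_lens.pop(seq_id, None)

manager = BlockManager(num_blocks=8)
for _ in range(20):                                  # 20 tokens span 2 blocks
    block, offset = manager.append_token(seq_id=0)
print(manager.block_tables[0])                       # e.g. [7, 6]
manager.free(seq_id=0)
```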
Alternatives and similar repositories for paged-attention-minimal
Users interested in paged-attention-minimal are comparing it to the libraries listed below.
- Mixed precision training from scratch with Tensors and CUDA ☆28 · Updated May 14, 2024
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,068 · Updated Dec 30, 2024
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- ☆41 · Updated Nov 1, 2025
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Updated Jun 11, 2025
- TiledLower is a Dataflow Analysis and Codegen Framework written in Rust. ☆14 · Updated Nov 23, 2024
- Framework to reduce autotune overhead to zero for well-known deployments. ☆96 · Updated Sep 19, 2025
- ☆27 · Updated Jan 8, 2024
- Fastest kernels written from scratch ☆533 · Updated Sep 18, 2025
- A collection of memory-efficient attention operators implemented in the Triton language. ☆287 · Updated Jun 5, 2024
- ☆52 · Updated May 19, 2025
- Applied AI experiments and examples for PyTorch ☆315 · Updated Aug 22, 2025
- ☆17 · Updated Jun 19, 2023
- Cataloging released Triton kernels. ☆294 · Updated Sep 9, 2025
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆458 · Updated May 30, 2025
- A practical way of learning Swizzle ☆36 · Updated Feb 3, 2025
- Triton-based Symmetric Memory operators and examples ☆81 · Updated Jan 15, 2026
- Several optimization methods of half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆72 · Updated Sep 8, 2024
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆486 · Updated Jan 20, 2026
- DeeperGEMM: crazy optimized version ☆74 · Updated May 5, 2025
- A lightweight design for computation-communication overlap. ☆219 · Updated Jan 20, 2026
- FlagGems is an operator library for large language models implemented in the Triton language. ☆898 · Updated this week
- Implementation of FlashAttention in PyTorch ☆180 · Updated Jan 12, 2025
- ☆11 · Updated Dec 22, 2024
- BFloat16 Fused Adam Operator for PyTorch ☆16 · Updated Nov 16, 2024
- Reproduction of the libsmctrl paper, with a Python-side interface added so compute resources can be allocated flexibly from Python ☆12 · Updated May 21, 2024
- 🎉 My collection of CUDA kernels ☆11 · Updated Jun 25, 2024
- ☆10 · Updated Feb 20, 2024
- See https://github.com/cuda-mode/triton-index/ instead! ☆11 · Updated May 8, 2024
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- JAX implementation of the GPTQ quantization algorithm ☆10 · Updated Jul 19, 2023
- Triton implementation of FlashAttention2 that adds Custom Masks. ☆167 · Updated Aug 14, 2024
- Parsers for CUDA binary files ☆25 · Updated Dec 29, 2023
- Fast and memory-efficient PyTorch implementation of the Perceiver with FlashAttention. ☆31 · Updated Nov 4, 2024
- Fast low-bit matmul kernels in Triton ☆429 · Updated Feb 1, 2026
- Standalone command-line tool for compiling Triton kernels ☆20 · Updated Sep 13, 2024
- ☆15 · Updated Oct 30, 2025
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆15 · Updated Oct 16, 2023
- Kernel Library Wheel for SGLang ☆17 · Updated Feb 9, 2026