kernels — ☆105 · Nov 7, 2024 · Updated last year
Alternatives and similar repositories for kernels
Users interested in kernels are comparing it to the libraries listed below.
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. — ☆327 · Updated this week
- Fast low-bit matmul kernels in Triton — ☆436 · Feb 1, 2026 · Updated last month
- ☆20 · Sep 28, 2024 · Updated last year
- Benchmark tests supporting the TiledCUDA library. — ☆18 · Nov 19, 2024 · Updated last year
- ☆301 · Updated this week
- Applied AI experiments and examples for PyTorch — ☆319 · Aug 22, 2025 · Updated 6 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. — ☆97 · Sep 19, 2025 · Updated 5 months ago
- TiledLower is a dataflow analysis and codegen framework written in Rust. — ☆14 · Nov 23, 2024 · Updated last year
- Cataloging released Triton kernels. — ☆295 · Sep 9, 2025 · Updated 5 months ago
- Transformers components, but in Triton — ☆34 · May 9, 2025 · Updated 9 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. — ☆288 · Jun 5, 2024 · Updated last year
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … — ☆194 · Jan 28, 2025 · Updated last year
- Implement Flash Attention using Cute. — ☆101 · Dec 17, 2024 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. — ☆106 · Jun 28, 2025 · Updated 8 months ago
- TiledKernel is a code generation library based on macro kernels and a memory hierarchy graph data structure. — ☆19 · May 12, 2024 · Updated last year
- ☆20 · Oct 11, 2023 · Updated 2 years ago
- FlexAttention with FlashAttention3 support — ☆27 · Oct 5, 2024 · Updated last year
- Extensible collectives library in Triton — ☆96 · Mar 31, 2025 · Updated 11 months ago
- ☆262 · Jul 11, 2024 · Updated last year
- Awesome Triton Resources — ☆39 · Apr 27, 2025 · Updated 10 months ago
- Collection of kernels written in the Triton language — ☆178 · Jan 27, 2026 · Updated last month
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling — ☆40 · Dec 2, 2023 · Updated 2 years ago
- FP8 flash attention for the Ada architecture, implemented with the cutlass library — ☆79 · Aug 12, 2024 · Updated last year
- Shared Middle-Layer for Triton Compilation — ☆329 · Dec 5, 2025 · Updated 2 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. — ☆595 · Aug 12, 2025 · Updated 6 months ago
- A Triton JIT runtime and FFI provider in C++ — ☆32 · Updated this week
- Parallel Associative Scan for Language Models — ☆18 · Jan 8, 2024 · Updated 2 years ago
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… — ☆30 · Jan 28, 2026 · Updated last month
- FlagGems is an operator library for large language models implemented in the Triton language. — ☆909 · Updated this week
- Standalone Flash Attention v2 kernel without a libtorch dependency — ☆114 · Sep 10, 2024 · Updated last year
- Triton implementation of bi-directional (non-causal) linear attention — ☆70 · Feb 22, 2026 · Updated last week
- A CUDA kernel for NHWC GroupNorm for PyTorch — ☆23 · Nov 15, 2024 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. — ☆46 · Jun 11, 2025 · Updated 8 months ago
- ☆34 · Feb 3, 2025 · Updated last year
- ☆65 · Apr 26, 2025 · Updated 10 months ago
- 🎉 My collection of CUDA kernels — ☆11 · Jun 25, 2024 · Updated last year
- Implementation of Hyena Hierarchy in JAX — ☆10 · Apr 30, 2023 · Updated 2 years ago
- Quantized Attention on GPU — ☆44 · Nov 22, 2024 · Updated last year
- ☆11 · Oct 11, 2023 · Updated 2 years ago