Alternatives and similar repositories for kernels
☆111 · Mar 12, 2026 · Updated last month
Users interested in kernels are comparing it to the libraries listed below.
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance (a minimal timing sketch follows this list). ☆351 · Updated this week
- TiledLower is a dataflow analysis and codegen framework written in Rust. ☆13 · Nov 23, 2024 · Updated last year
- ☆20 · Sep 28, 2024 · Updated last year
- ☆325 · Updated this week
- Fast low-bit matmul kernels in Triton. ☆446 · Apr 27, 2026 · Updated last week
- TiledKernel is a code generation library based on macro kernels and a memory-hierarchy graph data structure. ☆19 · May 12, 2024 · Updated last year
- Applied AI experiments and examples for PyTorch. ☆320 · Aug 22, 2025 · Updated 8 months ago
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- Cataloging released Triton kernels. ☆302 · Sep 9, 2025 · Updated 7 months ago
- Extensible collectives library in Triton. ☆98 · Mar 31, 2025 · Updated last year
- A collection of memory-efficient attention operators implemented in the Triton language. ☆290 · Jun 5, 2024 · Updated last year
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆193 · Jan 28, 2025 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments. ☆99 · Sep 19, 2025 · Updated 7 months ago
- Shared Middle-Layer for Triton Compilation. ☆331 · Dec 5, 2025 · Updated 4 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆108 · Jun 28, 2025 · Updated 10 months ago
- Transformers components, but in Triton. ☆34 · May 9, 2025 · Updated 11 months ago
- FlexAttention with FlashAttention3 support. ☆27 · Oct 5, 2024 · Updated last year
- ☆265 · Jul 11, 2024 · Updated last year
- Implement Flash Attention using CuTe. ☆106 · Dec 17, 2024 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆600 · Aug 12, 2025 · Updated 8 months ago
- FlagGems is an operator library for large language models implemented in the Triton language. ☆981 · Updated this week
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library. ☆81 · Aug 12, 2024 · Updated last year
- Collection of kernels written in the Triton language. ☆191 · Jan 27, 2026 · Updated 3 months ago
- ☆14 · Mar 8, 2025 · Updated last year
- 🎉 My collection of CUDA kernels. ☆11 · Jun 25, 2024 · Updated last year
- Parallel Associative Scan for Language Models (a parallel-scan sketch follows this list). ☆18 · Jan 8, 2024 · Updated 2 years ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling. ☆40 · Dec 2, 2023 · Updated 2 years ago
- Quantized attention on GPU. ☆44 · Nov 22, 2024 · Updated last year
- Standalone Flash Attention v2 kernel without a libtorch dependency. ☆113 · Sep 10, 2024 · Updated last year
- A Triton JIT runtime and FFI provider in C++. ☆33 · Apr 28, 2026 · Updated last week
- ☆52 · May 19, 2025 · Updated 11 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference. ☆46 · Jun 11, 2025 · Updated 10 months ago
- ☆20 · Oct 11, 2023 · Updated 2 years ago
- Triton implementation of bi-directional (non-causal) linear attention (a reference sketch in PyTorch follows this list). ☆75 · Mar 1, 2026 · Updated 2 months ago
- A unified programming framework for high and portable performance across FPGAs and GPUs. ☆11 · Mar 23, 2025 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (a block-scaling sketch follows this list). ☆22 · Updated this week
- ☆33 · Feb 3, 2025 · Updated last year
- Awesome Triton Resources. ☆40 · Apr 27, 2025 · Updated last year
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆280 · Jul 16, 2025 · Updated 9 months ago
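
Reference sketches for selected entries

For readers comparing benchmark suites such as Tritonbench: the core task is timing an operator over representative inputs. Below is a minimal sketch of that pattern using plain PyTorch CUDA events. This is not Tritonbench's API, just the general mechanism such suites build on; the helper name `bench` is made up for illustration.

```python
import torch

def bench(fn, warmup=10, iters=100):
    """Return the mean runtime of fn() in milliseconds on the current GPU."""
    for _ in range(warmup):                 # warm up: compilation, caches, clocks
        fn()
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()                # wait for all queued kernels to finish
    return start.elapsed_time(end) / iters  # elapsed_time reports milliseconds

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")
print(f"matmul: {bench(lambda: a @ b):.3f} ms")
```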
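On parallel associative scans for language models: a linear recurrence h_t = a_t * h_{t-1} + b_t looks inherently sequential, but the pairwise combine rule (a1, b1) ∘ (a2, b2) = (a1·a2, b1·a2 + b2) is associative, so the whole sequence can be scanned in O(log T) parallel steps. The sketch below is my own illustration of that idea (a Hillis–Steele style scan in plain PyTorch), not the repository's code.

```python
import torch

def linear_recurrence_scan(a, b):
    """Inclusive scan: returns h with h[t] = a[t]*h[t-1] + b[t], h[-1] = 0."""
    a, b = a.clone(), b.clone()
    T, offset = a.shape[0], 1
    while offset < T:
        # Combine each element with the partial result `offset` steps back,
        # using the old values of both arrays for this round.
        new_a = a[:-offset] * a[offset:]
        new_b = b[:-offset] * a[offset:] + b[offset:]
        a = torch.cat([a[:offset], new_a])
        b = torch.cat([b[:offset], new_b])
        offset *= 2
    return b                                # b now holds the scanned h values

# Check against the sequential recurrence.
T = 8
a, b = torch.rand(T), torch.rand(T)
h, out = 0.0, []
for t in range(T):
    h = a[t] * h + b[t]
    out.append(h)
print(torch.allclose(linear_recurrence_scan(a, b), torch.stack(out)))
```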
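On bi-directional (non-causal) linear attention: softmax(QKᵀ)V is replaced by φ(Q)(φ(K)ᵀV), dropping the cost from O(N²d) to O(Nd²). Without a causal mask, the key/value summary φ(K)ᵀV is a single matrix shared by every query, so no positional scan is needed at all. A back-of-the-envelope PyTorch sketch follows; it illustrates the math, not the repository's Triton kernel, and the elu+1 feature map is one common choice, not necessarily theirs.

```python
import torch
import torch.nn.functional as F

def bidirectional_linear_attention(q, k, v, eps=1e-6):
    """q, k: (N, d); v: (N, d_v). Returns (N, d_v)."""
    q, k = F.elu(q) + 1, F.elu(k) + 1           # non-negative feature map
    kv = torch.einsum("nd,ne->de", k, v)        # (d, d_v) global KV summary
    z = k.sum(dim=0)                            # normalizer terms, shape (d,)
    num = torch.einsum("nd,de->ne", q, kv)
    den = torch.einsum("nd,d->n", q, z).unsqueeze(-1) + eps
    return num / den

q, k, v = (torch.randn(1024, 64) for _ in range(3))
out = bidirectional_linear_attention(q, k, v)   # (1024, 64)
```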
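On fine-grained scaling as in DeepGEMM's description: instead of one quantization scale per tensor, each small block along K gets its own scale, so an outlier in one block does not crush the precision of every other block. The sketch below is only an illustration of that idea, not DeepGEMM's implementation: quantization is simulated with int8 for portability (real FP8 kernels use fp8 dtypes and tensor cores), and the 128-wide block size is an assumption for the example.

```python
import torch

BLOCK = 128  # assumed block width along K for this illustration

def quantize_blockwise(x):
    """x: (M, K) with K % BLOCK == 0 -> int8 codes plus per-block scales."""
    M, K = x.shape
    blocks = x.view(M, K // BLOCK, BLOCK)
    scales = blocks.abs().amax(dim=-1, keepdim=True) / 127.0
    scales = scales.clamp_min(1e-8)             # guard all-zero blocks
    codes = torch.round(blocks / scales).to(torch.int8)
    return codes, scales

def dequantize_blockwise(codes, scales):
    return (codes.float() * scales).view(codes.shape[0], -1)

# Columns spanning four orders of magnitude: per-block scales keep the
# relative error small where a single per-tensor scale would not.
x = torch.randn(64, 512) * torch.logspace(-2, 2, 512)
codes, scales = quantize_blockwise(x)
x_hat = dequantize_blockwise(codes, scales)
print((x - x_hat).abs().max() / x.abs().max())
```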