NVIDIA / tilus
Tilus is a tile-level kernel programming language, implemented in Python.
☆115 · Updated 2 weeks ago
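Tilus's own API is not shown on this page, so as a rough illustration of what tile-level kernel programming in Python looks like, the sketch below uses Triton (one of the alternatives listed further down) rather than Tilus itself; the kernel and all names in it are illustrative, not Tilus code.

```python
# Illustrative tile-level kernel in Triton (NOT the Tilus API): each program
# instance loads one BLOCK_SIZE-wide tile, adds the tiles, and stores the result.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                       # one program per tile
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                       # guard the ragged last tile
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)                # number of tiles to launch
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```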
Alternatives and similar repositories for tilus
Users interested in tilus are comparing it to the libraries listed below.
- TritonParse: A Compiler Tracer, Visualizer, and mini-Reproducer (WIP) for Triton Kernels ☆144 · Updated this week
- ☆86 · Updated 9 months ago
- A Quirky Assortment of CuTe Kernels ☆407 · Updated this week
- An experimental CPU backend for Triton ☆143 · Updated 2 months ago
- kernels, of the mega variety ☆472 · Updated 2 months ago
- Extensible collectives library in Triton ☆88 · Updated 4 months ago
- High-Performance SGEMM on CUDA devices ☆97 · Updated 7 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆94 · Updated last month
- AI Tensor Engine for ROCm ☆254 · Updated this week
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆260 · Updated this week
- Fast low-bit matmul kernels in Triton ☆349 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆204 · Updated last week
- ☆232 · Updated this week
- Fastest kernels written from scratch ☆314 · Updated 4 months ago
- ☆42 · Updated 3 months ago
- Collection of kernels written in the Triton language ☆145 · Updated 4 months ago
- MLIR-based partitioning system ☆120 · Updated last week
- ☆111 · Updated 5 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline ☆113 · Updated last year
- Ahead of Time (AOT) Triton Math Library ☆75 · Updated this week
- DeeperGEMM: crazy optimized version ☆71 · Updated 3 months ago
- ☆33 · Updated last month
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆61 · Updated 3 weeks ago
- Cataloging released Triton kernels. ☆252 · Updated 7 months ago
- ☆61 · Updated 3 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆74 · Updated this week
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) ☆181 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments. ☆80 · Updated last week
- 🤖 FFPA: Extends FlashAttention-2 with Split-D and ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆211 · Updated 2 weeks ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆44 · Updated 5 months ago