pytorch-labs / tritonbench
Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance.
☆136 · Updated this week
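To make the premise concrete, the sketch below times one PyTorch operator on a representative input using the stock `torch.utils.benchmark` API. It illustrates the "operator plus example input" pattern the description refers to; the operator choice and input shape are arbitrary assumptions, and this is not tritonbench's own harness.

```python
import torch
from torch.utils import benchmark

# Hypothetical example input; tritonbench pairs each operator with
# curated inputs like this so timings are reproducible.
x = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)

# Timer handles CUDA synchronization, so the reported wall-clock
# time reflects actual kernel execution.
t = benchmark.Timer(
    stmt="torch.softmax(x, dim=-1)",
    globals={"torch": torch, "x": x},
)
print(t.timeit(100))  # timing statistics over 100 runs
```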
Alternatives and similar repositories for tritonbench
Users interested in tritonbench are comparing it to the libraries listed below.
- Extensible collectives library in Triton ☆86 · Updated 2 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆99 · Updated 3 weeks ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆166 · Updated this week
- Applied AI experiments and examples for PyTorch ☆277 · Updated 3 weeks ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆109 · Updated 11 months ago
- Cataloging released Triton kernels. ☆238 · Updated 5 months ago
- Fast low-bit matmul kernels in Triton ☆322 · Updated last week
- Experimental PyTorch-native float8 training UX ☆224 · Updated 10 months ago
- DeeperGEMM: a heavily optimized version ☆69 · Updated last month
- Collection of kernels written in the Triton language ☆128 · Updated 2 months ago
- Ring-attention experiments ☆144 · Updated 8 months ago
- Fast Hadamard transform in CUDA, with a PyTorch interface (a pure-PyTorch reference is sketched after this list) ☆201 · Updated last year
- Framework that reduces autotuning overhead to zero for well-known deployments. ☆77 · Updated last week
- Benchmark code for the "Online normalizer calculation for softmax" paper (the recurrence is also sketched after this list) ☆94 · Updated 6 years ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆90 · Updated 2 weeks ago
- Ahead-of-Time (AOT) Triton Math Library ☆66 · Updated last week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆252 · Updated 7 months ago
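For the fast Hadamard transform entry above, here is a minimal pure-PyTorch reference of the unnormalized Walsh-Hadamard transform, showing the butterfly recurrence that the repository's CUDA kernel fuses; it is a sketch for clarity, not the repository's implementation:

```python
import torch

def hadamard_transform(x: torch.Tensor) -> torch.Tensor:
    """Unnormalized fast Walsh-Hadamard transform over the last dim.

    Pure-PyTorch O(n log n) reference; the linked repo fuses these
    butterfly stages into a single CUDA kernel.
    """
    n = x.shape[-1]
    assert n & (n - 1) == 0, "last dimension must be a power of two"
    batch = x.shape[:-1]
    h = 1
    while h < n:
        # Split each block of 2*h elements into halves a and b,
        # then apply the butterfly (a + b, a - b).
        y = x.reshape(*batch, n // (2 * h), 2, h)
        a, b = y[..., 0, :], y[..., 1, :]
        x = torch.stack((a + b, a - b), dim=-2).reshape(*batch, n)
        h *= 2
    return x
```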
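And for the "Online normalizer calculation for softmax" entry, the core idea (Milakov & Gimelshein, 2018) is a single pass that maintains the running maximum m and the normalizer d together, rescaling d whenever m changes. A minimal pure-Python sketch of the recurrence, not the repository's CUDA benchmark code:

```python
import math

def online_softmax_stats(xs):
    # One pass computes both the max m and the normalizer
    # d = sum(exp(x - m)), rescaling d when the running max grows.
    m, d = float("-inf"), 0.0
    for x in xs:
        m_new = max(m, x)
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return m, d

xs = [1.0, 3.0, 2.0, 5.0, 4.0]
m, d = online_softmax_stats(xs)
softmax = [math.exp(x - m) / d for x in xs]  # matches the two-pass safe softmax
```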