facebookexperimental / triton
GitHub mirror of the triton-lang/triton repo.
☆86 · Updated this week
Alternatives and similar repositories for triton
Users interested in triton are comparing it to the libraries listed below.
- ☆141 · Updated 9 months ago
- ☆92 · Updated 11 months ago
- ☆241 · Updated last year
- ☆150 · Updated 5 months ago
- Extensible collectives library in Triton ☆89 · Updated 6 months ago
- A lightweight design for computation-communication overlap. ☆181 · Updated 2 weeks ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆264 · Updated this week
- TVM FFI ☆79 · Updated this week
- ☆153 · Updated last year
- An extension of TVMScript for writing simple, high-performance GPU kernels with Tensor Cores. ☆51 · Updated last year
- ☆83 · Updated 2 years ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆186 · Updated 8 months ago
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆89 · Updated 3 weeks ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆100 · Updated 3 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆138 · Updated last month
- Shared Middle-Layer for Triton Compilation ☆292 · Updated 2 weeks ago
- This repository contains companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts". ☆69 · Updated last month
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆116 · Updated last year
- DeeperGEMM: crazy optimized version ☆72 · Updated 5 months ago
- ☆109 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆118 · Updated last month
- PyTorch bindings for CUTLASS grouped GEMM. ☆125 · Updated 4 months ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆138 · Updated 2 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆122 · Updated 3 years ago
- ☆100 · Updated last year
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆265 · Updated 3 months ago
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. ☆73 · Updated this week
- An experimental CPU backend for Triton ☆153 · Updated last week
- ☆75 · Updated 4 years ago
- Framework to reduce autotuning overhead to zero for well-known deployments. ☆84 · Updated last month
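The last entry touches a common pattern in this space: autotuners (like Triton's) benchmark candidate kernel configurations per input shape, and the overhead can be amortized to zero by caching the winning choice after the first call. A minimal plain-Python sketch of that idea, with hypothetical candidate implementations standing in for kernel variants (not the API of any listed project):

```python
import time
from functools import wraps

def autotune(candidates):
    """Pick the fastest candidate implementation for each tuning key
    (e.g. input shape) and cache the choice, so subsequent calls with
    the same key pay no tuning overhead. Illustrative sketch only;
    real autotuners warm up, repeat timings, and persist their caches."""
    cache = {}  # tuning key -> chosen implementation
    def decorator(key_fn):
        @wraps(key_fn)
        def wrapper(*args, **kwargs):
            key = key_fn(*args, **kwargs)
            impl = cache.get(key)
            if impl is None:
                best, best_t = None, float("inf")
                for cand in candidates:
                    t0 = time.perf_counter()
                    cand(*args, **kwargs)  # single timing run for brevity
                    dt = time.perf_counter() - t0
                    if dt < best_t:
                        best, best_t = cand, dt
                cache[key] = impl = best
            return impl(*args, **kwargs)
        return wrapper
    return decorator

# Two hypothetical variants of the same operation.
def sum_loop(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    return sum(xs)

@autotune([sum_loop, sum_builtin])
def tuned_sum(xs):
    return len(xs)  # the decorated function supplies the tuning key

result = tuned_sum(list(range(100)))  # first call tunes, then caches
```

The first call with a given key runs every candidate once; every later call with that key dispatches straight to the cached winner, which is the "zero overhead" being claimed.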