Deep-Learning-Profiling-Tools / triton-samples
☆13 · Updated 2 months ago
Alternatives and similar repositories for triton-samples
Users interested in triton-samples are comparing it to the libraries listed below.
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆85 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆124 · Updated this week
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆40 · Updated last month
- An extensible collectives library in Triton ☆86 · Updated last month
- Personal solutions to the Triton Puzzles ☆18 · Updated 9 months ago
- A bunch of kernels that might make stuff slower 😉 ☆40 · Updated this week
- Framework to reduce autotune overhead to zero for well known deployments. ☆70 · Updated this week
- ☆79 · Updated 6 months ago
- FlexAttention w/ FlashAttention3 Support ☆26 · Updated 7 months ago
- Automatic differentiation for Triton Kernels ☆11 · Updated last month
- ☆32 · Updated this week
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆132 · Updated this week
- Make triton easier ☆47 · Updated 11 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆45 · Updated 10 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆44 · Updated this week
- ☆27 · Updated 4 months ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆64 · Updated last year
- Collection of kernels written in Triton language ☆122 · Updated last month
- ☆70 · Updated last week
- GPTQ inference TVM kernel ☆38 · Updated last year
- ☆26 · Updated last year
- DeeperGEMM: crazy optimized version ☆69 · Updated last week
- Ahead of Time (AOT) Triton Math Library ☆63 · Updated this week
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆23 · Updated this week
- High-speed GEMV kernels, at most 2.7x speedup compared to pytorch baseline. ☆109 · Updated 10 months ago
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … ☆26 · Updated 4 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆108 · Updated 8 months ago
- ☆69 · Updated last month
- PyTorch centric eager mode debugger ☆47 · Updated 5 months ago
- High-Performance SGEMM on CUDA devices ☆91 · Updated 3 months ago
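
Most of the repositories above build on, extend, or benchmark Triton's Python-embedded kernel DSL. As background only, here is a minimal sketch of what such a kernel looks like, following the standard vector-add pattern from the Triton tutorials; it is not taken from any of the repositories listed, and the names (`add_kernel`, `add`) and the `BLOCK_SIZE=1024` choice are illustrative.

```python
# Minimal Triton kernel sketch (standard tutorial-style vector add).
# Assumes the `triton` and `torch` packages are installed and a CUDA device is available.
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide tile of the inputs.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the final partial tile
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    # One program per BLOCK_SIZE-element tile.
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out
```

Calling `add(x, y)` on two same-shaped CUDA tensors launches one kernel instance per 1024-element tile; the mask keeps the last, possibly partial, tile in bounds.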