Deep-Learning-Profiling-Tools / triton-samples
☆14 · Updated 7 months ago
Alternatives and similar repositories for triton-samples
Users interested in triton-samples are comparing it to the libraries listed below.
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆258 · Updated this week
- ☆92 · Updated 11 months ago
- Collection of kernels written in the Triton language ☆156 · Updated 6 months ago
- Cataloging released Triton kernels. ☆261 · Updated last month
- Automatic differentiation for Triton kernels ☆11 · Updated 2 months ago
- ☆240 · Updated this week
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆99 · Updated last week
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆160 · Updated this week
- A bunch of kernels that might make stuff slower 😉 ☆61 · Updated last week
- Fast low-bit matmul kernels in Triton ☆381 · Updated 3 weeks ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆98 · Updated 3 months ago
- High-Performance SGEMM on CUDA devices ☆107 · Updated 8 months ago
- Extensible collectives library in Triton ☆89 · Updated 6 months ago
- This repository contains companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts". ☆67 · Updated 3 weeks ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆45 · Updated 2 months ago
- Applied AI experiments and examples for PyTorch ☆299 · Updated last month
- ring-attention experiments ☆153 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments. ☆84 · Updated last month
- A minimal cache manager for PagedAttention, on top of llama3. ☆123 · Updated last year
- ☆240 · Updated last year
- An experimental CPU backend for Triton ☆153 · Updated this week
- High-speed GEMV kernels, with at most a 2.7x speedup over the PyTorch baseline. ☆116 · Updated last year
- GitHub mirror of the triton-lang/triton repo. ☆84 · Updated last week
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆83 · Updated last week
- Fastest kernels written from scratch ☆374 · Updated last month
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆389 · Updated this week
- Examples and exercises from the book Programming Massively Parallel Processors - A Hands-on Approach. David B. Kirk and Wen-mei W. Hwu (T… ☆74 · Updated 4 years ago
- ☆28 · Updated 9 months ago
- Evaluating Large Language Models for CUDA Code Generation. ComputeEval is a framework designed to generate and evaluate CUDA code from Lar… ☆68 · Updated 2 weeks ago
- Ahead of Time (AOT) Triton Math Library ☆79 · Updated this week