zinccat / Awesome-Triton-Kernels
Collection of kernels written in Triton language
☆118 · Updated this week
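For orientation, here is a minimal sketch of what a kernel in the Triton language looks like: an element-wise vector add, loosely following the official Triton tutorial. It assumes a CUDA-capable GPU and the `triton` and `torch` packages, and is illustrative only, not taken from any repository listed below.

```python
# Minimal Triton kernel: element-wise vector addition.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                        # each program handles one block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                        # guard against out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)                     # one program per 1024-element block
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
assert torch.allclose(add(x, y), x + y)
```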
Alternatives and similar repositories for Awesome-Triton-Kernels:
Users interested in Awesome-Triton-Kernels are comparing it to the libraries listed below.
- Cataloging released Triton kernels. ☆213 · Updated 3 months ago
- Fast low-bit matmul kernels in Triton (see the tiled-matmul sketch after this list) ☆279 · Updated last week
- Applied AI experiments and examples for PyTorch ☆256 · Updated 3 weeks ago
- ☆195 · Updated 2 weeks ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆108 · Updated last week
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆166 · Updated 10 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆222 · Updated 8 months ago
- ☆76 · Updated 5 months ago
- Extensible collectives library in Triton ☆84 · Updated last week
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆236 · Updated last month
- ring-attention experiments ☆129 · Updated 5 months ago
- ☆103 · Updated 7 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆210 · Updated 4 months ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆103 · Updated 8 months ago
- ☆194 · Updated 8 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆190 · Updated last week
- KernelBench: Can LLMs Write GPU Kernels? A benchmark with Torch → CUDA problems ☆254 · Updated last week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆156 · Updated 2 weeks ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆245 · Updated 5 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆78 · Updated 5 months ago
- Distributed Triton for Parallel Systems ☆146 · Updated this week
- ☆157 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆72 · Updated 7 months ago
- Fastest kernels written from scratch ☆213 · Updated last week
- A simple yet fast implementation of matrix multiplication in CUDA. ☆34 · Updated 8 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆203 · Updated last year
- A collection of memory-efficient attention operators implemented in the Triton language. ☆259 · Updated 10 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆345 · Updated 2 weeks ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆303 · Updated 9 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆98 · Updated this week
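Several entries above, such as the low-bit matmul kernels and the GEMV kernels, build on the same tiled-matmul pattern; the sketch below shows that pattern in its plainest form. It is an unoptimized, illustrative Triton tiled matmul (fp32 accumulation, fixed block sizes, no autotuning), not code from any listed repository; the real kernels layer quantization, autotuning, and layout tricks on top of this skeleton.

```python
# Minimal tiled matmul in Triton (illustrative only).
import torch
import triton
import triton.language as tl

@triton.jit
def matmul_kernel(a_ptr, b_ptr, c_ptr, M, N, K,
                  stride_am, stride_ak, stride_bk, stride_bn,
                  stride_cm, stride_cn,
                  BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr, BLOCK_K: tl.constexpr):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)   # rows of C this program owns
    rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)   # cols of C this program owns
    rk = tl.arange(0, BLOCK_K)
    a_ptrs = a_ptr + rm[:, None] * stride_am + rk[None, :] * stride_ak
    b_ptrs = b_ptr + rk[:, None] * stride_bk + rn[None, :] * stride_bn
    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    for k in range(0, K, BLOCK_K):                 # march along the K dimension
        a = tl.load(a_ptrs, mask=(rm[:, None] < M) & (rk[None, :] + k < K), other=0.0)
        b = tl.load(b_ptrs, mask=(rk[:, None] + k < K) & (rn[None, :] < N), other=0.0)
        acc += tl.dot(a, b)                        # block-level matmul
        a_ptrs += BLOCK_K * stride_ak
        b_ptrs += BLOCK_K * stride_bk
    c_ptrs = c_ptr + rm[:, None] * stride_cm + rn[None, :] * stride_cn
    tl.store(c_ptrs, acc, mask=(rm[:, None] < M) & (rn[None, :] < N))

def matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    M, K = a.shape
    K2, N = b.shape
    assert K == K2, "inner dimensions must match"
    c = torch.empty((M, N), device=a.device, dtype=torch.float32)
    grid = (triton.cdiv(M, 64), triton.cdiv(N, 64))  # one program per 64x64 tile of C
    matmul_kernel[grid](a, b, c, M, N, K,
                        a.stride(0), a.stride(1), b.stride(0), b.stride(1),
                        c.stride(0), c.stride(1),
                        BLOCK_M=64, BLOCK_N=64, BLOCK_K=32)
    return c
```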