tile-ai / tilelang
Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels
☆1,608 · Updated this week
Alternatives and similar repositories for tilelang
Users interested in tilelang are comparing it to the libraries listed below.
- Distributed Compiler based on Triton for Parallel Systems ☆1,090 · Updated last week
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆1,773 · Updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,106 · Updated 2 weeks ago
- Puzzles for learning Triton; solve them with minimal environment configuration! ☆504 · Updated 9 months ago
- FlagGems is an operator library for large language models implemented in the Triton language. ☆668 · Updated this week
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆918 · Updated 8 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆667 · Updated last month
- Fast CUDA matrix multiplication from scratch ☆834 · Updated last week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆747 · Updated 6 months ago
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆557 · Updated 2 weeks ago
- FlashInfer: Kernel Library for LLM Serving ☆3,723 · Updated this week
- Tile primitives for speedy kernels ☆2,672 · Updated last week
- depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile. ☆725 · Updated 4 months ago
- A Quirky Assortment of CuTe Kernels ☆450 · Updated this week
- Perplexity GPU Kernels ☆458 · Updated last month
- A throughput-oriented high-performance serving framework for LLMs ☆886 · Updated last month
- A PyTorch Native LLM Training Framework ☆863 · Updated 2 months ago
- Materials for learning SGLang ☆562 · Updated last week
- kernels, of the mega variety ☆486 · Updated 3 months ago
- Puzzles for learning Triton ☆1,978 · Updated 9 months ago
- Fastest kernels written from scratch ☆323 · Updated 5 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆414 · Updated 3 months ago
- LLM KV cache compression made easy ☆604 · Updated this week
- Flash Attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆416 · Updated 3 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆575 · Updated last month
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆355 · Updated last week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆895 · Updated last year
- An easy-to-understand TensorOp Matmul tutorial ☆376 · Updated 11 months ago
- Zero Bubble Pipeline Parallelism ☆424 · Updated 4 months ago
- Ring attention implementation with flash attention ☆864 · Updated last month