HazyResearch / ThunderKittens
Tile primitives for speedy kernels
☆2,821 · Updated last week
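For readers unfamiliar with the "tile primitives" idea that ThunderKittens and several of the libraries below build on, here is a minimal sketch in plain CUDA of the underlying pattern: a thread block stages square tiles of the operands in shared memory and reuses them to accumulate one tile of the output. This is a generic illustration, not the ThunderKittens API; the `TILE` size and the `tiled_matmul` name are hypothetical, and the kernel assumes the matrix dimension is a multiple of `TILE`.

```cuda
// Generic tiled matrix multiply: the access pattern that tile-primitive
// libraries wrap in higher-level abstractions. Plain CUDA; no
// ThunderKittens API is used here.
#include <cuda_runtime.h>

constexpr int TILE = 16;  // hypothetical tile edge length

__global__ void tiled_matmul(const float* A, const float* B, float* C, int N) {
    // Shared-memory staging buffers for one tile of each operand.
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;  // output row this thread owns
    int col = blockIdx.x * TILE + threadIdx.x;  // output column this thread owns
    float acc = 0.0f;

    // Walk the shared K dimension one tile at a time (assumes N % TILE == 0).
    for (int t = 0; t < N / TILE; ++t) {
        // Cooperative load: each thread fetches one element of each tile.
        As[threadIdx.y][threadIdx.x] = A[row * N + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * N + col];
        __syncthreads();

        // Each staged element is reused TILE times from fast shared memory
        // instead of being refetched from global memory.
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }

    C[row * N + col] = acc;
}
```

A launch would use `dim3 grid(N / TILE, N / TILE)` and `dim3 block(TILE, TILE)`; tile-primitive libraries replace this hand-written staging with typed tile objects and warp-level operations.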
Alternatives and similar repositories for ThunderKittens
Users interested in ThunderKittens are comparing it to the libraries listed below.
- Puzzles for learning Triton ☆2,036 · Updated 11 months ago
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆1,891 · Updated this week
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆945 · Updated 9 months ago
- GPU programming-related news and material links ☆1,741 · Updated last month
- FlashInfer: Kernel Library for LLM Serving ☆3,911 · Updated last week
- PyTorch native quantization and sparsity for training and inference ☆2,438 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,834 · Updated this week
- Fast CUDA matrix multiplication from scratch ☆908 · Updated last month
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆3,658 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆1,856 · Updated last month
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆3,517 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton ☆578 · Updated 2 months ago
- A Quirky Assortment of CuTe Kernels ☆627 · Updated last week
- A PyTorch native platform for training generative AI models ☆4,561 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆1,173 · Updated 3 weeks ago
- What would you do with 1000 H100s... ☆1,113 · Updated last year
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆612 · Updated last week
- Minimalistic large language model 3D-parallelism training ☆2,267 · Updated last month
- Helpful tools and examples for working with flex-attention ☆1,020 · Updated this week
- depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile ☆745 · Updated last week
- A throughput-oriented high-performance serving framework for LLMs ☆904 · Updated last month
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment ☆698 · Updated 2 months ago
- Pipeline Parallelism for PyTorch ☆780 · Updated last year
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,610 · Updated this week
- Building blocks for foundation models ☆566 · Updated last year
- A fast communication-overlapping library for tensor/expert parallelism on GPUs ☆1,145 · Updated last month
- A unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. … ☆1,443 · Updated last week
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,415 · Updated this week
- kernels, of the mega variety ☆586 · Updated 3 weeks ago
- An open-source efficient deep learning framework/compiler, written in Python ☆731 · Updated last month