Building blocks for foundation models.
☆608 · Jan 3, 2024 · Updated 2 years ago
Alternatives and similar repositories for aisys-building-blocks
Users interested in aisys-building-blocks are comparing it to the libraries listed below.
- Tile primitives for speedy kernels ☆3,218 · Mar 6, 2026 · Updated last week
- GPU programming related news and material links ☆2,028 · Updated this week
- Understand and test language model architectures on synthetic tasks. ☆259 · Feb 24, 2026 · Updated 2 weeks ago
- FlexAttention w/ FlashAttention3 support ☆27 · Oct 5, 2024 · Updated last year
- An ML Systems onboarding list ☆1,003 · Feb 19, 2026 · Updated 3 weeks ago
- An experiment in using Tangent to autodiff Triton ☆82 · Jan 22, 2024 · Updated 2 years ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆329 · Updated this week
- Cataloging released Triton kernels. ☆298 · Sep 9, 2025 · Updated 6 months ago
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆344 · Dec 28, 2024 · Updated last year
- ☆51 · Jan 28, 2024 · Updated 2 years ago
- Applied AI experiments and examples for PyTorch ☆319 · Aug 22, 2025 · Updated 6 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,528 · Updated this week
- Fast low-bit matmul kernels in Triton ☆436 · Feb 1, 2026 · Updated last month
- Framework to reduce autotune overhead to zero for well-known deployments. ☆97 · Sep 19, 2025 · Updated 5 months ago
- Material for gpu-mode lectures ☆5,818 · Feb 1, 2026 · Updated last month
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆107 · Jun 28, 2025 · Updated 8 months ago
- Tilus is a tile-level kernel programming language with explicit control over shared memory and registers. ☆446 · Mar 6, 2026 · Updated last week
- A Quirky Assortment of CuTe Kernels ☆849 · Updated this week
- Distributed compiler based on Triton for parallel systems ☆1,380 · Feb 13, 2026 · Updated last month
- Helpful tools and examples for working with flex-attention ☆1,153 · Feb 8, 2026 · Updated last month
- Puzzles for learning Triton ☆2,327 · Nov 18, 2024 · Updated last year
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- 🎬 3.7× faster video generation E2E · 🖼️ 1.6× faster image generation E2E · ⚡ ColumnSparseAttn 9.3× vs FlashAttn-3 · 💨 ColumnSparseGEMM 2.5× … ☆101 · Sep 8, 2025 · Updated 6 months ago
- KernelBench: Can LLMs Write GPU Kernels? A benchmark and toolkit for Torch -> CUDA (+ more DSLs) ☆852 · Updated this week
- Efficient Triton kernels for LLM training ☆6,204 · Updated this week
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆249 · Jun 6, 2025 · Updated 9 months ago
- FlashInfer: Kernel Library for LLM Serving ☆5,101 · Updated this week
- DeeperGEMM: a heavily optimized version ☆74 · May 5, 2025 · Updated 10 months ago
- Accelerated first-order parallel associative scan ☆195 · Jan 7, 2026 · Updated 2 months ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,085 · Dec 30, 2024 · Updated last year
- A bibliography and survey of the papers surrounding o1 ☆1,212 · Nov 16, 2024 · Updated last year
- What would you do with 1000 H100s... ☆1,157 · Jan 10, 2024 · Updated 2 years ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆597 · Aug 12, 2025 · Updated 7 months ago
- A PyTorch-native platform for training generative AI models ☆5,139 · Updated this week
- ☆65 · Apr 26, 2025 · Updated 10 months ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆788 · Updated this week
- Development repository for the Triton language and compiler ☆18,573 · Mar 7, 2026 · Updated last week
- ☆305 · Updated this week
- ☆260 · Jul 11, 2024 · Updated last year