Build compute kernels and load them from the Hub.
☆518, updated Mar 20, 2026
Alternatives and similar repositories for kernels
Users interested in kernels are comparing it to the libraries listed below.
- 👷 Build compute kernels (☆216, updated Jan 27, 2026)
- A Quirky Assortment of CuTe Kernels (☆861, updated this week)
- Kernel sources for https://huggingface.co/kernels-community (☆80, updated this week)
- Hugging Face Jobs (☆19, updated Jul 11, 2025)
- Minimalistic large language model 3D-parallelism training (☆2,617, updated Feb 19, 2026)
- Efficient Triton Kernels for LLM Training (☆6,216, updated this week)
- FlashInfer: Kernel Library for LLM Serving (☆5,145, updated this week)
- [ICLR'25] Code for KaSA, an official implementation of "KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models" (☆20, updated Jan 16, 2025)
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark + Toolkit with Torch -> CUDA (+ more DSLs) (☆869, updated Mar 9, 2026)
- Tile primitives for speedy kernels (☆3,232, updated this week)
- Automatically derive Python dunder methods for your Rust code (☆25, updated Jan 28, 2026)
- Framework to reduce autotune overhead to zero for well-known deployments (☆97, updated Sep 19, 2025)
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance (☆330, updated Mar 14, 2026)
- Minimalistic 4D-parallelism distributed training framework for educational purposes (☆2,116, updated Aug 26, 2025)
- Applied AI experiments and examples for PyTorch (☆319, updated Aug 22, 2025)
- PyTorch native quantization and sparsity for training and inference (☆2,730, updated this week)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,630, updated this week)
- FlexAttention w/ FlashAttention3 Support (☆27, updated Oct 5, 2024)
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens (☆1,041, updated Sep 4, 2024)
- Flash-Muon: An Efficient Implementation of Muon Optimizer (☆242, updated Jun 15, 2025)
- Fast low-bit matmul kernels in Triton (☆438, updated Feb 1, 2026)
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends (☆2,339, updated Mar 9, 2026)
- Helpful tools and examples for working with flex-attention (☆1,157, updated Feb 8, 2026)
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel (☆2,159, updated this week)
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels (☆5,403, updated this week)
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… (☆331, updated Sep 25, 2025)
- A PyTorch native platform for training generative AI models (☆5,162, updated this week)
- Distributed Compiler based on Triton for Parallel Systems (☆1,386, updated Mar 11, 2026)
- Quantized Attention on GPU (☆44, updated Nov 22, 2024)
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing (☆106, updated Jun 28, 2025)
- 🔥 A minimal training framework for scaling FLA models (☆357, updated Nov 15, 2025)
- A PyTorch quantization backend for optimum (☆1,032, updated Nov 21, 2025)