Efficient Triton Kernels for LLM Training
Alternatives and similar repositories for Liger-Kernel
Users interested in Liger-Kernel often compare it to the libraries listed below.
- FlashInfer: Kernel Library for LLM Serving
- A PyTorch native platform for training generative AI models
- Tile primitives for speedy kernels
- SGLang is a high-performance serving framework for large language models and multimodal models.
- Fast and memory-efficient exact attention
- Development repository for the Triton language and compiler
- Minimalistic large language model 3D-parallelism training
- 🚀 Efficient implementations of state-of-the-art linear attention models
- PyTorch native quantization and sparsity for training and inference
- verl: Volcano Engine Reinforcement Learning for LLMs
- Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs.
- A high-throughput and memory-efficient inference and serving engine for LLMs
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H…
- Puzzles for learning Triton
- Train transformer language models with reinforcement learning.
- Ongoing research training transformer models at scale
- PyTorch native post-training library
- Distributed Compiler based on Triton for Parallel Systems