🚀 Efficient implementations of state-of-the-art linear attention models
★4,428 · Feb 26, 2026 · Updated this week
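The "linear attention" in the title is an algorithmic claim: instead of softmax(QKᵀ)V, which costs O(N²) in sequence length, these models compute a feature-mapped form φ(Q)(φ(K)ᵀV) that admits an O(N) causal recurrence over a small running state. As a hedged illustration only (plain PyTorch following Katharopoulos et al. 2020, not the flash-linear-attention API; the function name and feature map are our own choices), the recurrence looks like this:

```python
import torch
import torch.nn.functional as F

def linear_attention_recurrent(q, k, v):
    """Causal linear attention as an O(N) recurrence over a running state.

    q, k, v: (batch, seq_len, dim); phi(x) = elu(x) + 1 is one common
    feature map (Katharopoulos et al., 2020). Illustrative sketch only.
    """
    q, k = F.elu(q) + 1.0, F.elu(k) + 1.0
    B, N, D = q.shape
    E = v.shape[-1]
    state = torch.zeros(B, D, E, device=q.device, dtype=q.dtype)  # running sum of k_t v_t^T
    norm = torch.zeros(B, D, device=q.device, dtype=q.dtype)      # running sum of k_t
    out = torch.empty_like(v)
    for t in range(N):
        state = state + k[:, t, :, None] * v[:, t, None, :]   # rank-1 state update
        norm = norm + k[:, t]
        num = torch.einsum('bd,bde->be', q[:, t], state)       # q_t against the state
        den = (q[:, t] * norm).sum(-1, keepdim=True).clamp_min(1e-6)
        out[:, t] = num / den
    return out
```

The libraries below compete on making exactly this kind of state-passing computation fast on real hardware, via fused Triton/CUDA kernels rather than a Python loop over time steps.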
Alternatives and similar repositories for flash-linear-attention
Users interested in flash-linear-attention are comparing it to the libraries listed below.
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ★969 · Feb 5, 2026 · Updated 3 weeks ago
- FlashInfer: Kernel Library for LLM Serving ★5,057 · Updated this week
- Tile primitives for speedy kernels ★3,183 · Feb 24, 2026 · Updated last week
- A PyTorch native platform for training generative AI models ★5,098 · Updated this week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ★5,284 · Updated this week
- 🔥 A minimal training framework for scaling FLA models ★352 · Nov 15, 2025 · Updated 3 months ago
- Efficient Triton Kernels for LLM Training ★6,162 · Updated this week
- Ring attention implementation with flash attention ★986 · Sep 10, 2025 · Updated 5 months ago
- Fast and memory-efficient exact attention ★22,361 · Updated this week
- Helpful tools and examples for working with flex-attention ★1,136 · Feb 8, 2026 · Updated 3 weeks ago
- Distributed Compiler based on Triton for Parallel Systems ★1,371 · Feb 13, 2026 · Updated 2 weeks ago
- PyTorch native quantization and sparsity for training and inference ★2,707 · Updated this week
- Puzzles for learning Triton ★2,314 · Nov 18, 2024 · Updated last year
- Development repository for the Triton language and compiler ★18,501 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ★19,519 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models. ★23,905 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ★3,176 · Updated this week
- MoBA: Mixture of Block Attention for Long-Context LLMs ★2,073 · Apr 3, 2025 · Updated 11 months ago
- Mamba SSM architecture ★17,257 · Feb 18, 2026 · Updated last week
- Ongoing research training transformer models at scale ★15,461 · Updated this week
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ★2,145 · Feb 23, 2026 · Updated last week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ★3,919 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ★595 · Aug 12, 2025 · Updated 6 months ago
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ★9,037 · Feb 21, 2026 · Updated last week
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ★341 · Feb 23, 2025 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ★816 · Mar 6, 2025 · Updated 11 months ago
- Muon is Scalable for LLM Training ★1,440 · Aug 3, 2025 · Updated 7 months ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ★1,261 · Aug 28, 2025 · Updated 6 months ago
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ★9,315 · Updated this week
- Minimalistic large language model 3D-parallelism training ★2,579 · Feb 19, 2026 · Updated last week
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ★6,206 · Updated this week
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ★477 · Feb 17, 2026 · Updated 2 weeks ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ★1,190 · Sep 30, 2025 · Updated 5 months ago
- Understand and test language model architectures on synthetic tasks. ★254 · Feb 24, 2026 · Updated last week
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ★650 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ★10,353 · Feb 20, 2026 · Updated last week
- A Quirky Assortment of CuTe Kernels ★814 · Feb 23, 2026 · Updated last week
- Fast low-bit matmul kernels in Triton ★433 · Feb 1, 2026 · Updated last month
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ★644 · Jan 15, 2026 · Updated last month
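A recurring theme among the kernels above (flash-linear-attention itself, Lightning Attention-2, Gated DeltaNet, the Mamba line) is the chunkwise-parallel formulation: the sequence is split into fixed-size blocks so that intra-block work becomes dense, tensor-core-friendly matmuls, while history crosses block boundaries through a small recurrent state. A minimal sketch of that split (unnormalized linear attention in plain PyTorch; the function name and chunk size are illustrative assumptions, not any listed library's API):

```python
import torch

def linear_attention_chunked(q, k, v, chunk_size=64):
    """Unnormalized causal linear attention, chunkwise-parallel form.

    Intra-chunk: masked (Q K^T) V matmuls. Inter-chunk: a (D x E) running
    state. Illustrative sketch; assumes seq_len is a multiple of chunk_size.
    """
    B, N, D = q.shape
    E = v.shape[-1]
    assert N % chunk_size == 0, "sketch requires seq_len % chunk_size == 0"
    S = torch.zeros(B, D, E, device=q.device, dtype=q.dtype)  # inter-chunk state
    causal = torch.tril(torch.ones(chunk_size, chunk_size,
                                   dtype=torch.bool, device=q.device))
    out = torch.empty_like(v)
    for s in range(0, N, chunk_size):
        qi = q[:, s:s + chunk_size]
        ki = k[:, s:s + chunk_size]
        vi = v[:, s:s + chunk_size]
        intra = (qi @ ki.transpose(1, 2)).masked_fill(~causal, 0.0) @ vi
        out[:, s:s + chunk_size] = qi @ S + intra  # past chunks + current chunk
        S = S + ki.transpose(1, 2) @ vi            # fold this chunk into the state
    return out
```

The production kernels fuse these steps into Triton/CUDA and add per-model details (decay terms, gating, normalization) that this sketch deliberately omits.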