Efficient implementations for emerging model architectures
☆4,999 · Updated this week (Apr 27, 2026)
Alternatives and similar repositories for flash-linear-attention
Users interested in flash-linear-attention are comparing it to the libraries listed below.
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ☆993 · Updated 2 months ago (Feb 5, 2026)
- FlashInfer: Kernel Library for LLM Serving · ☆5,544 · Updated this week
- 🔥 A minimal training framework for scaling FLA models · ☆385 · Updated last week (Apr 22, 2026)
- Tile primitives for speedy kernels · ☆3,336 · Updated this week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/Accelerators kernels · ☆5,928 · Updated this week
- A PyTorch native platform for training generative AI models · ☆5,286 · Updated this week
- Efficient Triton Kernels for LLM Training · ☆6,315 · Updated this week
- Fast and memory-efficient exact attention · ☆23,563 · Updated this week
- Ring attention implementation with flash attention · ☆1,014 · Updated 7 months ago (Sep 10, 2025)
- Distributed Compiler based on Triton for Parallel Systems · ☆1,420 · Updated last week (Apr 22, 2026)
- Helpful tools and examples for working with flex-attention · ☆1,182 · Updated 3 weeks ago (Apr 13, 2026)
- Development repository for the Triton language and compiler · ☆19,087 · Updated this week
- Puzzles for learning Triton · ☆2,404 · Updated last month (Apr 1, 2026)
- SGLang is a high-performance serving framework for large language models and multimodal models. · ☆26,832 · Updated this week
- MoBA: Mixture of Block Attention for Long-Context LLMs · ☆2,108 · Updated last year (Apr 3, 2025)
- verl/HybridFlow: A Flexible and Efficient RL Post-Training Framework · ☆21,046 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… · ☆3,312 · Updated this week
- PyTorch native quantization and sparsity for training and inference · ☆2,807 · Updated this week
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule · ☆558 · Updated last month (Mar 13, 2026)
- Understand and test language model architectures on synthetic tasks. · ☆265 · Updated last month (Mar 22, 2026)
- Mamba SSM architecture · ☆18,118 · Updated last week (Apr 27, 2026)
- Ongoing research training transformer models at scale · ☆16,203 · Updated this week
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel · ☆2,234 · Updated this week
- Muon is Scalable for LLM Training · ☆1,469 · Updated 9 months ago (Aug 3, 2025)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆834 · Updated last year (Mar 6, 2025)
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models · ☆344 · Updated last year (Feb 23, 2025)
- A Quirky Assortment of CuTe Kernels · ☆955 · Updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. · ☆1,295 · Updated 8 months ago (Aug 28, 2025)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention… · ☆1,210 · Updated 3 weeks ago (Apr 8, 2026)
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… · ☆9,417 · Updated last week (Apr 27, 2026)
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… · ☆4,036 · Updated this week
- CUDA Templates and Python DSLs for High-Performance Linear Algebra · ☆9,663 · Updated last week (Apr 25, 2026)
- Accelerated First Order Parallel Associative Scan · ☆197 · Updated 3 months ago (Jan 7, 2026)
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training · ☆795 · Updated last week (Apr 21, 2026)
- ☆107 · Updated 2 years ago (Mar 9, 2024)
- ☆130 · Updated 3 months ago (Feb 4, 2026)
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling · ☆7,144 · Updated last week (Apr 24, 2026)
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆252 · Updated 10 months ago (Jun 6, 2025)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. · ☆600 · Updated 8 months ago (Aug 12, 2025)