fla-org / flash-linear-attention
🚀 Efficient implementations of state-of-the-art linear attention models in Torch and Triton
⭐ 2,438 · Updated this week
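For readers new to the technique these kernels accelerate, here is a minimal sketch of the causal linear-attention recurrence in plain PyTorch. It shows the constant-size-state formulation that flash-linear-attention implements as fused, chunked Triton kernels; this is not the library's API, and the elu+1 feature map is just one common choice.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    """Causal linear attention via a (d_k x d_v) running state.

    q, k: (T, d_k); v: (T, d_v). A real kernel chunks and parallelizes
    this loop; the naive O(T) recurrence is kept here for clarity.
    """
    q, k = F.elu(q) + 1, F.elu(k) + 1        # positive feature map
    S = torch.zeros(q.shape[1], v.shape[1])  # running sum of k_t v_t^T
    z = torch.zeros(q.shape[1])              # running sum of k_t
    out = []
    for q_t, k_t, v_t in zip(q, k, v):
        S = S + torch.outer(k_t, v_t)
        z = z + k_t
        out.append((q_t @ S) / (q_t @ z).clamp(min=1e-6))
    return torch.stack(out)

out = linear_attention(torch.randn(16, 8), torch.randn(16, 8), torch.randn(16, 4))
```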
Alternatives and similar repositories for flash-linear-attention
Users interested in flash-linear-attention are comparing it to the libraries listed below.
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States (see the TTT sketch after this list) ⭐ 1,198 · Updated 10 months ago
- Muon optimizer: >30% sample efficiency with <3% wall-clock overhead (see the Newton-Schulz sketch after this list) ⭐ 661 · Updated this week
- Helpful tools and examples for working with flex-attention (see the flex_attention example after this list) ⭐ 802 · Updated last week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ⭐ 686 · Updated 2 months ago
- Tile primitives for speedy kernels ⭐ 2,399 · Updated this week
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper ⭐ 637 · Updated 2 weeks ago
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ⭐ 850 · Updated this week
- [ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ⭐ 559 · Updated 3 months ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ⭐ 1,505 · Updated 2 months ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 ⭐ 1,266 · Updated 2 weeks ago
- Puzzles for learning Triton (see the starter kernel after this list) ⭐ 1,658 · Updated 6 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (see the projection sketch after this list) ⭐ 1,560 · Updated 7 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ⭐ 513 · Updated 2 weeks ago
- A simple and efficient Mamba implementation in pure PyTorch and MLX ⭐ 1,239 · Updated 5 months ago
- Ring attention implementation with flash attention (see the ring-attention sketch after this list) ⭐ 771 · Updated last week
- PyTorch native quantization and sparsity for training and inference ⭐ 2,072 · Updated this week
- A bibliography and survey of the papers surrounding o1 ⭐ 1,193 · Updated 6 months ago
- Minimalistic large language model 3D-parallelism training ⭐ 1,898 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ⭐ 3,044 · Updated this week
- A curated list for Efficient Large Language Models ⭐ 1,684 · Updated last month
- A PyTorch native platform for training generative AI models ⭐ 3,868 · Updated this week
- A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ⭐ 1,225 · Updated this week
- Code for the BLT research paper ⭐ 1,664 · Updated last week
- Tutel MoE: an optimized Mixture-of-Experts library; supports DeepSeek FP8/FP4 ⭐ 824 · Updated this week
- A collection of AWESOME things about mixture-of-experts ⭐ 1,135 · Updated 5 months ago
- Muon is Scalable for LLM Training ⭐ 1,052 · Updated 2 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ⭐ 876 · Updated last month
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ⭐ 2,435 · Updated last week
- depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile ⭐ 686 · Updated last month
- [TMLR 2024] Efficient Large Language Models: A Survey ⭐ 1,161 · Updated 2 months ago
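Below are short sketches of the core ideas behind a few of the repositories above; each is a simplified illustration under stated assumptions, not the repository's actual code. First, the TTT layer from "Learning to (Learn at Test Time)": the sequence layer's hidden state is itself the weights of a small inner model, updated by a gradient step on a self-supervised loss at every token. This sketch assumes a single linear inner model and a plain regression loss; the paper's inner model, loss, and learned projections are richer.

```python
import torch

def ttt_linear(q, k, v, inner_lr=0.1):
    """Toy test-time-training layer: the 'hidden state' is a weight matrix W.

    At step t, take one gradient step on the inner loss ||W k_t - v_t||^2,
    then read out W q_t. q, k, v: (T, d).
    """
    T, d = k.shape
    W = torch.zeros(d, d)
    out = []
    for t in range(T):
        err = W @ k[t] - v[t]                      # inner-loss residual
        W = W - inner_lr * torch.outer(err, k[t])  # gradient step on W
        out.append(W @ q[t])
    return torch.stack(out)
```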
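Muon's core step orthogonalizes each 2-D momentum buffer before applying it, via a quintic Newton-Schulz iteration. A sketch of just that step, using the coefficients from the author's reference implementation (the full optimizer adds momentum, learning-rate scaling, and fallbacks for non-2D parameters):

```python
import torch

def newton_schulz_orthogonalize(G, steps=5, eps=1e-7):
    """Approximately map G to the nearest semi-orthogonal matrix.

    Quintic Newton-Schulz iteration; valid once the input is scaled so
    its singular values are <= 1.
    """
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + eps)           # bound the spectral norm by 1
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T                        # iterate on the smaller Gram matrix
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X
```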
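The flex-attention item refers to PyTorch's torch.nn.attention.flex_attention API (added around PyTorch 2.5), which expresses attention variants as a score_mod callback that torch.compile can fuse into a single kernel. A small example combining causal masking with an ALiBi-style linear bias; treat the fixed -0.1 slope as an assumption of this sketch (real ALiBi uses per-head slopes, and large static masks are better expressed with a block_mask):

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

def causal_alibi(score, b, h, q_idx, kv_idx):
    # Add a distance-proportional bias and mask out future positions.
    bias = -0.1 * (q_idx - kv_idx)
    return torch.where(q_idx >= kv_idx, score + bias, float("-inf"))

q, k, v = (torch.randn(2, 4, 128, 64) for _ in range(3))  # (B, H, T, D)
out = flex_attention(q, k, v, score_mod=causal_alibi)
```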
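For a sense of what the Triton puzzles exercise, here is the canonical starter kernel: a blocked, masked vector add. Each program instance handles one BLOCK_SIZE tile, with a mask guarding the ragged final block (requires a CUDA device):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements          # guard the final partial block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x, y = torch.randn(10000, device="cuda"), torch.randn(10000, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```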
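GaLore's trick is to project each layer's gradient into a low-rank subspace (refreshed periodically via SVD), keep the optimizer state there, and project updates back, so Adam's moments cost O(r(m+n)) rather than O(mn) memory. A minimal sketch of the projection step only, not the paper's full optimizer:

```python
import torch

def galore_project(grad, P=None, rank=4, refresh=False):
    """Project an (m, n) gradient into a rank-r subspace.

    P holds the top-r left singular vectors; the paper refreshes it every
    few hundred steps rather than every step.
    """
    if P is None or refresh:
        U, _, _ = torch.linalg.svd(grad, full_matrices=False)
        P = U[:, :rank]                  # (m, r) orthonormal basis
    low_rank_grad = P.T @ grad           # (r, n): feed this to Adam
    return low_rank_grad, P

def galore_project_back(low_rank_update, P):
    return P @ low_rank_update           # (m, n) full-size update

g = torch.randn(512, 256)
lr_g, P = galore_project(g, rank=4, refresh=True)
update = galore_project_back(-1e-3 * lr_g, P)  # stand-in for an Adam step
```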
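Finally, both ring-attention repositories implement the same pattern: shard K/V across devices arranged in a ring, rotate the shards, and merge partial attention with an online (streaming) softmax so no device materializes the full score matrix. A single-process, non-causal simulation of that accumulation:

```python
import torch

def ring_attention_sim(q, k, v, n_devices=4):
    """Simulate ring attention on one process. q: (T, d); k, v: (S, d)."""
    T, d_v = q.shape[0], v.shape[1]
    scale = q.shape[1] ** -0.5
    m = torch.full((T, 1), float("-inf"))  # running row-wise max
    l = torch.zeros(T, 1)                  # running softmax denominator
    acc = torch.zeros(T, d_v)              # running weighted sum of V
    # Each iteration stands in for one ring step, where a device receives
    # the next K/V shard from its neighbor.
    for k_s, v_s in zip(k.chunk(n_devices), v.chunk(n_devices)):
        s = (q @ k_s.T) * scale                    # local score block
        m_new = torch.maximum(m, s.max(dim=-1, keepdim=True).values)
        p = torch.exp(s - m_new)
        correction = torch.exp(m - m_new)          # rescale old statistics
        l = l * correction + p.sum(dim=-1, keepdim=True)
        acc = acc * correction + p @ v_s
        m = m_new
    return acc / l

q, k, v = torch.randn(8, 16), torch.randn(32, 16), torch.randn(32, 16)
assert torch.allclose(ring_attention_sim(q, k, v),
                      torch.softmax(q @ k.T / 16 ** 0.5, -1) @ v, atol=1e-5)
```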