fla-org / flash-linear-attention
Efficient implementations of state-of-the-art linear attention models
★ 2,987 · Updated this week
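For orientation, the recurrence that these libraries accelerate fits in a few lines of plain PyTorch. The sketch below is a minimal, non-fused reference of vanilla linear attention (in the style of Katharopoulos et al., with an elu+1 feature map); the function name, shapes, and feature map are illustrative assumptions, not flash-linear-attention's API, which ships fused Triton kernels for many model variants instead of a Python loop.

```python
# Minimal, non-fused reference sketch of the causal linear attention recurrence.
# Names, shapes, and the elu+1 feature map are illustrative assumptions;
# flash-linear-attention provides fused Triton kernels, not this loop.
import torch
import torch.nn.functional as F


def linear_attention_reference(q, k, v):
    """q, k, v: (batch, heads, seq_len, head_dim)."""
    phi = lambda x: F.elu(x) + 1.0          # positive feature map
    q, k = phi(q), phi(k)

    b, h, t, d = q.shape
    s = q.new_zeros(b, h, d, d)             # running state      S_t = sum_i k_i v_i^T
    z = q.new_zeros(b, h, d)                # running normalizer z_t = sum_i k_i
    outs = []
    for i in range(t):                      # O(T * d^2) work instead of O(T^2 * d)
        s = s + k[:, :, i, :, None] * v[:, :, i, None, :]
        z = z + k[:, :, i]
        num = torch.einsum("bhd,bhde->bhe", q[:, :, i], s)
        den = torch.einsum("bhd,bhd->bh", q[:, :, i], z).clamp_min(1e-6)
        outs.append(num / den[..., None])
    return torch.stack(outs, dim=2)         # (batch, heads, seq_len, head_dim)


if __name__ == "__main__":
    q, k, v = (torch.randn(2, 4, 16, 32) for _ in range(3))
    print(linear_attention_reference(q, k, v).shape)  # torch.Size([2, 4, 16, 32])
```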
Alternatives and similar repositories for flash-linear-attention
Users interested in flash-linear-attention are comparing it to the libraries listed below.
- Muon is an optimizer for hidden layers in neural networks — ★ 1,390 · Updated 3 weeks ago
- Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" — ★ 731 · Updated 4 months ago
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper — ★ 673 · Updated last month
- Puzzles for learning Triton — ★ 1,801 · Updated 8 months ago
- Helpful tools and examples for working with flex-attention — ★ 904 · Updated 2 weeks ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes — ★ 1,619 · Updated 3 weeks ago
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States — ★ 1,239 · Updated last year
- Tile primitives for speedy kernels — ★ 2,541 · Updated this week
- Muon is Scalable for LLM Training — ★ 1,223 · Updated 4 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection — ★ 1,579 · Updated 9 months ago
- A PyTorch native platform for training generative AI models — ★ 4,125 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… — ★ 2,587 · Updated this week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels — ★ 1,472 · Updated this week
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA — ★ 1,629 · Updated this week
- FlashInfer: Kernel Library for LLM Serving — ★ 3,448 · Updated this week
- Code for the BLT research paper — ★ 1,760 · Updated 2 months ago
- Implementing DeepSeek R1's GRPO algorithm from scratch — ★ 1,496 · Updated 3 months ago
- [ICLR 2025 Spotlight] Official implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters — ★ 567 · Updated 5 months ago
- PyTorch native quantization and sparsity for training and inference — ★ 2,219 · Updated this week
- Minimalistic large language model 3D-parallelism training — ★ 2,068 · Updated 3 weeks ago
- A bibliography and survey of the papers surrounding o1 — ★ 1,207 · Updated 8 months ago
- Official PyTorch implementation for "Large Language Diffusion Models" — ★ 2,630 · Updated last month
- Official implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 — ★ 1,439 · Updated this week
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation — ★ 818 · Updated 10 months ago
- A collection of AWESOME things about mixture-of-experts — ★ 1,177 · Updated 7 months ago
- MoBA: Mixture of Block Attention for Long-Context LLMs — ★ 1,846 · Updated 3 months ago
- A family of open-source Mixture-of-Experts (MoE) Large Language Models — ★ 1,568 · Updated last year
- Ring attention implementation with flash attention — ★ 828 · Updated last week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration — ★ 3,181 · Updated 2 weeks ago
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting DeepSeek/Kimi-K2/Qwen3 FP8/FP4 — ★ 870 · Updated last week