fla-org / flash-linear-attention
🚀 Efficient implementations of state-of-the-art linear attention models
★ 3,517 · Updated this week
Alternatives and similar repositories for flash-linear-attention
Users interested in flash-linear-attention are comparing it to the libraries listed below.
- Muon is an optimizer for hidden layers in neural networks · ★ 1,888 · Updated 3 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ★ 903 · Updated 7 months ago
- Minimalistic 4D-parallelism distributed training framework for education purpose · ★ 1,856 · Updated last month
- Puzzles for learning Triton · ★ 2,036 · Updated 11 months ago
- A PyTorch native platform for training generative AI models · ★ 4,561 · Updated this week
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper · ★ 772 · Updated 2 months ago
- Helpful tools and examples for working with flex-attention · ★ 1,020 · Updated last week
- Tile primitives for speedy kernels · ★ 2,821 · Updated last week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… · ★ 2,834 · Updated this week
- PyTorch native quantization and sparsity for training and inference · ★ 2,438 · Updated this week
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel · ★ 1,891 · Updated this week
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States · ★ 1,263 · Updated last year
- FlashInfer: Kernel Library for LLM Serving · ★ 3,952 · Updated this week
- Muon is Scalable for LLM Training · ★ 1,336 · Updated 2 months ago
- Official PyTorch implementation for "Large Language Diffusion Models" · ★ 3,079 · Updated last week
- NanoGPT (124M) in 3 minutes · ★ 3,565 · Updated last week
- Implementing DeepSeek R1's GRPO algorithm from scratch · ★ 1,621 · Updated 6 months ago
- Minimalistic large language model 3D-parallelism training · ★ 2,267 · Updated last month
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/Accelerators kernels · ★ 3,658 · Updated this week
- Unofficial implementation of Titans, SOTA memory for transformers, in Pytorch · ★ 1,474 · Updated last week
- slime is an LLM post-training framework for RL Scaling. · ★ 2,170 · Updated last week
- Awesome LLM compression research papers and tools. · ★ 1,690 · Updated 3 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration · ★ 3,318 · Updated 3 months ago
- A collection of AWESOME things about mixture-of-experts · ★ 1,216 · Updated 10 months ago
- Code for BLT research paper · ★ 1,995 · Updated 5 months ago
- MoBA: Mixture of Block Attention for Long-Context LLMs · ★ 1,941 · Updated 6 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection · ★ 1,610 · Updated 11 months ago
- Ring attention implementation with flash attention · ★ 901 · Updated last month
- Training Large Language Model to Reason in a Continuous Latent Space · ★ 1,297 · Updated 2 months ago
- A curated list for Efficient Large Language Models · ★ 1,874 · Updated 4 months ago