fla-org / flash-linear-attention
🚀 Efficient implementations of state-of-the-art linear attention models
☆3,937 · Updated this week
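As a quick reference for what "linear attention" means here: instead of materializing a T×T attention matrix, the query/key interaction is pushed through a positive feature map so the key-value products can be accumulated into a fixed-size running state, giving O(T·d²) time and O(d²) memory. Below is a minimal pure-PyTorch sketch of that causal recurrence, assuming the common `elu(x) + 1` feature map; the function name is illustrative, and the repository itself ships fused Triton kernels and many model variants rather than this naive loop.

```python
import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v):
    """Naive causal linear attention: O(T * d^2) time, O(d^2) state.

    q, k, v: (batch, heads, seq_len, head_dim). Reference sketch of the
    recurrence that fused kernels implement far more efficiently;
    elu(x) + 1 is one common choice of positive feature map.
    """
    q, k = F.elu(q) + 1, F.elu(k) + 1
    b, h, t, d = q.shape
    kv_state = torch.zeros(b, h, d, d, dtype=q.dtype, device=q.device)  # running sum of k_s outer v_s
    k_state = torch.zeros(b, h, d, dtype=q.dtype, device=q.device)      # running sum of k_s (normalizer)
    out = torch.empty_like(v)
    for s in range(t):
        kv_state = kv_state + k[:, :, s].unsqueeze(-1) * v[:, :, s].unsqueeze(-2)
        k_state = k_state + k[:, :, s]
        num = torch.einsum('bhd,bhde->bhe', q[:, :, s], kv_state)
        den = torch.einsum('bhd,bhd->bh', q[:, :, s], k_state).clamp(min=1e-6)
        out[:, :, s] = num / den.unsqueeze(-1)
    return out

# Example: 2 sequences, 4 heads, 128 tokens, 64-dim heads.
q, k, v = (torch.randn(2, 4, 128, 64) for _ in range(3))
print(causal_linear_attention(q, k, v).shape)  # torch.Size([2, 4, 128, 64])
```

Because the per-step state has a fixed size, the same recurrence can be chunked and parallelized across the sequence, which is what the Triton implementations in this family of repositories exploit.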
Alternatives and similar repositories for flash-linear-attention
Users interested in flash-linear-attention are comparing it to the libraries listed below:
- Muon is an optimizer for the hidden layers of neural networks (see the sketch after this list) ☆2,056 · Updated last week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆928 · Updated 8 months ago
- A PyTorch native platform for training generative AI models ☆4,778 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆4,168 · Updated this week
- Puzzles for learning Triton ☆2,143 · Updated last year
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆1,901 · Updated 3 months ago
- PyTorch native quantization and sparsity for training and inference ☆2,543 · Updated this week
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ☆785 · Updated 3 months ago
- Tile primitives for speedy kernels ☆2,955 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆2,971 · Updated this week
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,282 · Updated last year
- Muon is Scalable for LLM Training ☆1,372 · Updated 4 months ago
- Helpful tools and examples for working with flex-attention ☆1,062 · Updated 2 weeks ago
- slime is an LLM post-training framework for RL scaling ☆2,612 · Updated this week
- Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch ☆1,533 · Updated last week
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆1,973 · Updated last week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆4,054 · Updated this week
- Official PyTorch implementation for "Large Language Diffusion Models" ☆3,333 · Updated 3 weeks ago
- Minimalistic large language model 3D-parallelism training ☆2,351 · Updated last week
- Simple RL training for reasoning ☆3,796 · Updated 4 months ago
- NanoGPT (124M) in 3 minutes ☆3,911 · Updated last week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,368 · Updated 4 months ago
- Implementing DeepSeek R1's GRPO algorithm from scratch ☆1,682 · Updated 7 months ago
- Efficient Triton Kernels for LLM Training ☆5,892 · Updated this week
- A collection of AWESOME things about mixture-of-experts ☆1,234 · Updated 11 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,629 · Updated last year
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,007 · Updated 8 months ago
- Official Repo for Open-Reasoner-Zero ☆2,069 · Updated 6 months ago
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆1,040 · Updated this week
- A curated list for Efficient Large Language Models ☆1,910 · Updated 5 months ago
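On the Muon entry above: the optimizer's core move is to orthogonalize each 2-D momentum update via a quintic Newton-Schulz iteration before applying it to a hidden-layer weight matrix. A minimal single-matrix sketch follows; the (a, b, c) coefficients are the ones published with Muon but should be treated as an assumption here, the function name is illustrative, and the real optimizer additionally uses Nesterov-style momentum, per-shape learning-rate scaling, and distributed handling.

```python
import torch

def newton_schulz_orthogonalize(g, steps=5):
    """Approximately map a matrix to the nearest semi-orthogonal one.

    Quintic Newton-Schulz iteration; (a, b, c) below are the published
    Muon coefficients (assumed, not verified against the repo).
    """
    a, b, c = 3.4445, -4.7750, 2.0315
    x = g / (g.norm() + 1e-7)   # Frobenius norm bounds the spectral norm, so the iteration converges
    transposed = x.shape[0] > x.shape[1]
    if transposed:              # iterate on the wide orientation for a smaller Gram matrix
        x = x.T
    for _ in range(steps):
        s = x @ x.T
        x = a * x + (b * s + c * s @ s) @ x
    return x.T if transposed else x

# Muon-style update for one hidden-layer weight matrix (simplified:
# plain momentum instead of Nesterov).
w = torch.randn(256, 128)
grad = torch.randn_like(w)
m = torch.zeros_like(w)
lr, beta = 0.02, 0.95
m = beta * m + grad
w = w - lr * newton_schulz_orthogonalize(m)
print(w.shape)  # torch.Size([256, 128])
```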