fla-org / flash-linear-attention
🚀 Efficient implementations of state-of-the-art linear attention models
☆4,282 · Updated this week
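For context on what these kernels accelerate: linear attention replaces softmax attention's O(N²) score matrix with a kernel feature map φ, so the output can be computed as φ(Q)(φ(K)ᵀV) in O(N) time and O(d²) state. Below is a minimal PyTorch sketch of that idea only — it is not the library's actual fused Triton kernels, and the elu(x)+1 feature map and non-causal formulation are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    """Illustrative O(N) non-causal linear attention.

    q, k, v: (batch, heads, seq_len, head_dim).
    phi(x) = elu(x) + 1 is one common feature-map choice, assumed
    here for illustration; real kernels fuse these steps on-GPU.
    """
    phi_q = F.elu(q) + 1.0
    phi_k = F.elu(k) + 1.0
    # Contract (K^T V) first: a (d, d) state instead of an (N, N) score matrix.
    kv = torch.einsum("bhnd,bhne->bhde", phi_k, v)
    # Normalizer: phi(q_i) . sum_j phi(k_j), the linear analogue of softmax's denominator.
    z = torch.einsum("bhnd,bhd->bhn", phi_q, phi_k.sum(dim=2))
    out = torch.einsum("bhnd,bhde->bhne", phi_q, kv) / z.unsqueeze(-1).clamp(min=1e-6)
    return out

q = k = v = torch.randn(2, 4, 128, 64)
print(linear_attention(q, k, v).shape)  # torch.Size([2, 4, 128, 64])
```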
Alternatives and similar repositories for flash-linear-attention
Users interested in flash-linear-attention are comparing it to the libraries listed below.
- Muon is an optimizer for hidden layers in neural networks ☆2,231 · Updated this week
- Puzzles for learning Triton ☆2,246 · Updated last year
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆956 · Updated 10 months ago
- A PyTorch native platform for training generative AI models ☆4,972 · Updated this week
- Helpful tools and examples for working with flex-attention ☆1,112 · Updated last week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,092 · Updated last week
- PyTorch native quantization and sparsity for training and inference ☆2,631 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆1,991 · Updated 4 months ago
- FlashInfer: Kernel Library for LLM Serving ☆4,707 · Updated this week
- Muon is Scalable for LLM Training ☆1,407 · Updated 5 months ago
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,311 · Updated last year
- Tile primitives for speedy kernels ☆3,096 · Updated last week
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ☆793 · Updated 5 months ago
- Minimalistic large language model 3D-parallelism training ☆2,422 · Updated last month
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆2,084 · Updated 2 weeks ago
- Official PyTorch implementation for "Large Language Diffusion Models" ☆3,496 · Updated 2 months ago
- Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch ☆1,887 · Updated 2 weeks ago
- slime is an LLM post-training framework for RL scaling ☆3,466 · Updated this week
- Implementing DeepSeek R1's GRPO algorithm from scratch ☆1,747 · Updated 9 months ago
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆4,739 · Updated this week
- Code for the BLT research paper ☆2,026 · Updated 2 months ago
- Ring attention implementation with flash attention ☆967 · Updated 4 months ago
- NanoGPT (124M) in 3 minutes ☆4,149 · Updated last week
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,047 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,420 · Updated 6 months ago
- Awesome LLM compression research papers and tools ☆1,759 · Updated 2 months ago
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,032 · Updated 9 months ago
- 📚 A curated list of awesome LLM/VLM inference papers with code: Flash-Attention, Paged-Attention, WINT8/4, parallelism, etc. 🎉 ☆4,909 · Updated last month
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆1,096 · Updated this week
- A collection of AWESOME things about mixture-of-experts ☆1,255 · Updated last year