fla-org / flash-linear-attention
🚀 Efficient implementations of state-of-the-art linear attention models
☆4,352, updated last week
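For context on what the repository covers: linear attention replaces softmax attention's quadratic score matrix with a running state that is updated token by token. The snippet below is a minimal, illustrative PyTorch sketch of that recurrence (using an ELU+1 feature map); it is not taken from flash-linear-attention's API or Triton kernels, which are far more optimized.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    # Illustrative causal linear attention, assuming q, k, v share shape
    # (batch, seq_len, dim). Cost is O(T * d^2) instead of O(T^2 * d).
    q, k = F.elu(q) + 1.0, F.elu(k) + 1.0     # positive feature map
    b, t, d = q.shape
    state = q.new_zeros(b, d, d)              # running sum of k_i v_i^T
    norm = q.new_zeros(b, d)                  # running sum of k_i (normalizer)
    out = torch.empty_like(v)
    for i in range(t):
        ki, vi, qi = k[:, i], v[:, i], q[:, i]
        state = state + ki.unsqueeze(-1) * vi.unsqueeze(1)   # (b, d, d)
        norm = norm + ki
        num = torch.einsum('bd,bde->be', qi, state)           # q_i^T S_i
        den = (qi * norm).sum(-1, keepdim=True).clamp_min(1e-6)
        out[:, i] = num / den
    return out

# Example usage on random tensors
q, k, v = (torch.randn(2, 16, 64) for _ in range(3))
print(linear_attention(q, k, v).shape)  # torch.Size([2, 16, 64])
```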
Alternatives and similar repositories for flash-linear-attention
Users interested in flash-linear-attention are comparing it to the libraries listed below.
- Muon is an optimizer for hidden layers in neural networks (☆2,267, updated 3 weeks ago)
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper (☆797, updated 5 months ago)
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" (☆964, updated this week)
- Puzzles for learning Triton (☆2,283, updated last year)
- FlashInfer: Kernel Library for LLM Serving (☆4,935, updated this week)
- Minimalistic 4D-parallelism distributed training framework for educational purposes (☆2,058, updated 5 months ago)
- Official PyTorch implementation of "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" (☆1,318, updated last year)
- Helpful tools and examples for working with flex-attention (☆1,118, updated 3 weeks ago)
- Minimalistic large language model 3D-parallelism training (☆2,544, updated 2 months ago)
- Muon is Scalable for LLM Training (☆1,426, updated 6 months ago)
- slime is an LLM post-training framework for RL Scaling (☆3,668, updated this week)
- A PyTorch native platform for training generative AI models (☆5,045, updated this week)
- Tile primitives for speedy kernels (☆3,120, updated this week)
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels (☆5,094, updated this week)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (☆3,152, updated this week)
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel (☆2,120, updated last week)
- PyTorch native quantization and sparsity for training and inference (☆2,668, updated this week)
- MoBA: Mixture of Block Attention for Long-Context LLMs (☆2,044, updated 10 months ago)
- Implementing DeepSeek R1's GRPO algorithm from scratch (☆1,762, updated 9 months ago)
- Official PyTorch implementation for "Large Language Diffusion Models" (☆3,554, updated 2 months ago)
- Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch (☆1,924, updated 2 weeks ago)
- NanoGPT (124M) in 2 minutes (☆4,589, updated last week)
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (☆3,436, updated 6 months ago)
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ (☆1,117, updated 2 weeks ago)
- Code for the BLT research paper (☆2,027, updated 3 months ago)
- A curated list for Efficient Large Language Models (☆1,950, updated 7 months ago)
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) (☆8,949, updated this week)
- My learning notes for ML SYS (☆5,306, updated last week)
- Awesome LLM compression research papers and tools (☆1,771, updated 3 months ago)
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo (☆1,620, updated this week)