fla-org / flash-linear-attention
Efficient implementations of state-of-the-art linear attention models
★3,404 · Updated last week
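To make the subject concrete: a minimal NumPy sketch of the causal linear-attention recurrence that libraries like flash-linear-attention implement with fused Triton kernels. The function name and shapes here are illustrative, not the library's API. The key idea is that replacing the softmax with a kernel feature map lets attention be computed with a running (d × d) state, so the cost is O(N·d²) in sequence length N instead of softmax attention's O(N²·d).

```python
import numpy as np

def linear_attention(q, k, v):
    """Unnormalized causal linear attention.
    q, k, v: arrays of shape (seq_len, d). Returns (seq_len, d)."""
    seq_len, d = q.shape
    S = np.zeros((d, d))           # running key-value state: sum_s k_s v_s^T
    out = np.zeros_like(v)
    for t in range(seq_len):
        S += np.outer(k[t], v[t])  # accumulate k_t v_t^T
        out[t] = S.T @ q[t]        # o_t = sum_{s<=t} (q_t . k_s) v_s
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((5, 4)) for _ in range(3))
o = linear_attention(q, k, v)

# Equivalent parallel (causal-masked) form, useful as a correctness check:
mask = np.tril(np.ones((5, 5)))
o_ref = (mask * (q @ k.T)) @ v
assert np.allclose(o, o_ref)
```

The recurrent form is what makes hardware-efficient chunked implementations possible: the state S can be carried across sequence chunks, which is the structure these kernel libraries exploit.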
Alternatives and similar repositories for flash-linear-attention
Users interested in flash-linear-attention are comparing it to the libraries listed below.
- Muon is an optimizer for hidden layers in neural networks (★1,767, updated 2 months ago)
- Tile primitives for speedy kernels (★2,767, updated last week)
- A PyTorch native platform for training generative AI models (★4,476, updated this week)
- Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" (★877, updated 6 months ago)
- Puzzles for learning Triton (★2,008, updated 10 months ago)
- Official PyTorch implementation for "Large Language Diffusion Models" (★2,971, updated 2 weeks ago)
- FlashInfer: Kernel Library for LLM Serving (★3,829, updated this week)
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper (★761, updated last month)
- Helpful tools and examples for working with flex-attention (★997, updated 3 weeks ago)
- PyTorch native quantization and sparsity for training and inference (★2,384, updated this week)
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States (★1,254, updated last year)
- Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch (★1,462, updated 4 months ago)
- Minimalistic 4D-parallelism distributed training framework for education purposes (★1,836, updated last month)
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel (★1,842, updated this week)
- Minimalistic large language model 3D-parallelism training (★2,239, updated last month)
- Muon is Scalable for LLM Training (★1,318, updated 2 months ago)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… (★2,755, updated this week)
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels (★1,793, updated this week)
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (★3,282, updated 2 months ago)
- NanoGPT (124M) in 3 minutes (★3,145, updated 2 months ago)
- Implementing DeepSeek R1's GRPO algorithm from scratch (★1,587, updated 5 months ago)
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (★1,610, updated 11 months ago)
- Code for the BLT research paper (★1,985, updated 4 months ago)
- A collection of AWESOME things about mixture-of-experts (★1,209, updated 9 months ago)
- A curated list for Efficient Large Language Models (★1,873, updated 3 months ago)
- slime is an LLM post-training framework for RL scaling (★2,023, updated this week)
- [TMLR 2024] Efficient Large Language Models: A Survey (★1,219, updated 3 months ago)
- Flash Attention in ~100 lines of CUDA (forward pass only) (★940, updated 9 months ago)
- Simple RL training for reasoning (★3,754, updated 2 months ago)
- Official Repo for Open-Reasoner-Zero (★2,045, updated 4 months ago)