fla-org / flash-linear-attention
Efficient implementations of state-of-the-art linear attention models
⭐ 2,876 · Updated this week
Alternatives and similar repositories for flash-linear-attention
Users interested in flash-linear-attention are comparing it to the libraries listed below.
- Muon is an optimizer for hidden layers in neural networks · ⭐ 988 · Updated this week
- Tile primitives for speedy kernels · ⭐ 2,501 · Updated this week
- Helpful tools and examples for working with flex-attention · ⭐ 865 · Updated 2 weeks ago
- Minimalistic 4D-parallelism distributed training framework for education purposes · ⭐ 1,566 · Updated last month
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States · ⭐ 1,227 · Updated 11 months ago
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper · ⭐ 667 · Updated last month
- Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ⭐ 720 · Updated 3 months ago
- Puzzles for learning Triton · ⭐ 1,747 · Updated 7 months ago
- FlashInfer: Kernel Library for LLM Serving · ⭐ 3,349 · Updated this week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels · ⭐ 1,391 · Updated this week
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA · ⭐ 1,540 · Updated this week
- Muon is Scalable for LLM Training · ⭐ 1,093 · Updated 3 months ago
- Code for BLT research paper · ⭐ 1,725 · Updated last month
- A collection of AWESOME things about mixture-of-experts · ⭐ 1,159 · Updated 7 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… · ⭐ 2,548 · Updated this week
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 · ⭐ 1,384 · Updated this week
- A PyTorch native platform for training generative AI models · ⭐ 4,032 · Updated this week
- Unofficial implementation of Titans, SOTA memory for transformers, in Pytorch · ⭐ 1,402 · Updated last month
- Minimalistic large language model 3D-parallelism training · ⭐ 2,012 · Updated this week
- Implementing DeepSeek R1's GRPO algorithm from scratch · ⭐ 1,469 · Updated 2 months ago
- PyTorch native quantization and sparsity for training and inference · ⭐ 2,168 · Updated this week
- Schedule-Free Optimization in PyTorch · ⭐ 2,189 · Updated last month
- [ICLR 2025 Spotlight] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters · ⭐ 563 · Updated 5 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration · ⭐ 3,140 · Updated this week
- NanoGPT (124M) in 3 minutes · ⭐ 2,774 · Updated 3 weeks ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection · ⭐ 1,574 · Updated 8 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models · ⭐ 1,440 · Updated last year
- Official PyTorch implementation for "Large Language Diffusion Models" · ⭐ 2,530 · Updated 3 weeks ago
- A curated list for Efficient Large Language Models · ⭐ 1,776 · Updated 3 weeks ago
- A bibliography and survey of the papers surrounding o1 · ⭐ 1,205 · Updated 7 months ago