fla-org / flash-linear-attention
Efficient implementations of state-of-the-art linear attention models
★ 3,045 · Updated this week
Alternatives and similar repositories for flash-linear-attention
Users interested in flash-linear-attention are comparing it to the libraries listed below
- Muon is an optimizer for hidden layers in neural networks · ★ 1,547 · Updated last month
- Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ★ 822 · Updated 5 months ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes · ★ 1,673 · Updated last month
- Helpful tools and examples for working with flex-attention · ★ 938 · Updated last week
- Puzzles for learning Triton · ★ 1,925 · Updated 9 months ago
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper · ★ 725 · Updated last week
- Tile primitives for speedy kernels · ★ 2,579 · Updated 2 weeks ago
- Official PyTorch implementation of "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" · ★ 1,241 · Updated last year
- FlashInfer: Kernel Library for LLM Serving · ★ 3,571 · Updated this week
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA · ★ 1,688 · Updated this week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels · ★ 1,530 · Updated this week
- Minimalistic large language model 3D-parallelism training · ★ 2,130 · Updated last month
- Muon is Scalable for LLM Training · ★ 1,281 · Updated 2 weeks ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Blackwell… · ★ 2,645 · Updated last week
- PyTorch native quantization and sparsity for training and inference · ★ 2,251 · Updated last week
- A PyTorch native platform for training generative AI models · ★ 4,272 · Updated this week
- Implementing DeepSeek R1's GRPO algorithm from scratch · ★ 1,537 · Updated 4 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection · ★ 1,586 · Updated 9 months ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 · ★ 1,480 · Updated last week
- Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch · ★ 1,439 · Updated 2 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration · ★ 3,207 · Updated last month
- A collection of AWESOME things about mixture-of-experts · ★ 1,190 · Updated 8 months ago
- Ring attention implementation with flash attention · ★ 841 · Updated 2 weeks ago
- Official PyTorch implementation for "Large Language Diffusion Models" · ★ 2,763 · Updated this week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models · ★ 1,472 · Updated last year
- A bibliography and survey of the papers surrounding o1 · ★ 1,209 · Updated 9 months ago
- depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile · ★ 715 · Updated 4 months ago
- Tutel MoE: Optimized Mixture-of-Experts Library, Support GptOss/DeepSeek/Kimi-K2/Qwen3 FP8/NVFP4/MXFP4 · ★ 891 · Updated last week
- Must-read papers and blogs on Speculative Decoding · ★ 890 · Updated last week
- NanoGPT (124M) in 3 minutes · ★ 3,037 · Updated last month