fla-org / flash-linear-attention
🚀 Efficient implementations of state-of-the-art linear attention models in Torch and Triton
☆2,144 · Updated this week
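For context on what the library implements: linear attention replaces the softmax score matrix with a kernel feature map φ, so causal attention becomes a running sum over the sequence and costs O(N) rather than O(N²). Below is a minimal, unfused PyTorch sketch of that recurrence; the elu+1 feature map and the function name are illustrative choices, not fla's actual API.

```python
import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v, eps=1e-6):
    """O(N) causal attention: softmax(QK^T)V is replaced by a positive
    feature map phi, so each output reads a running sum of k_s v_s^T."""
    # phi(x) = elu(x) + 1 is one common illustrative positive feature map.
    q, k = F.elu(q) + 1.0, F.elu(k) + 1.0           # (B, H, N, D)
    # Running state S_t = sum_{s<=t} k_s v_s^T, built here with a cumsum.
    # Materializing the per-position (D, D) states is fine for a sketch;
    # fused Triton kernels keep the state in SRAM/registers instead.
    kv = torch.einsum("bhnd,bhne->bhnde", k, v).cumsum(dim=2)
    z = k.cumsum(dim=2)                              # normalizer sum_{s<=t} k_s
    num = torch.einsum("bhnd,bhnde->bhne", q, kv)
    den = torch.einsum("bhnd,bhnd->bhn", q, z).unsqueeze(-1)
    return num / (den + eps)

# Shapes: (batch, heads, seq_len, head_dim)
q, k, v = (torch.randn(2, 4, 128, 64) for _ in range(3))
out = causal_linear_attention(q, k, v)               # (2, 4, 128, 64)
```

The naive cumsum version above trades memory for simplicity; the point of kernel libraries like this one is to compute the same recurrence in fused chunks without ever materializing the intermediate states.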
Alternatives and similar repositories for flash-linear-attention:
Users interested in flash-linear-attention are comparing it to the libraries listed below.
- Helpful tools and examples for working with flex-attention ☆695 · Updated this week
- Puzzles for learning Triton ☆1,527 · Updated 4 months ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆948 · Updated 2 weeks ago
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,144 · Updated 8 months ago
- A bibliography and survey of the papers surrounding o1 ☆1,182 · Updated 4 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆506 · Updated 4 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,523 · Updated 4 months ago
- Minimalistic large language model 3D-parallelism training ☆1,701 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆2,439 · Updated this week
- Tile primitives for speedy kernels ☆2,170 · Updated this week
- Muon optimizer: +>30% sample efficiency with <3% wallclock overhead ☆521 · Updated 2 weeks ago
- Ring attention implementation with flash attention ☆714 · Updated 3 weeks ago
- [ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆535 · Updated last month
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ☆1,057 · Updated this week
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆769 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,216 · Updated 2 weeks ago
- Building blocks for foundation models. ☆464 · Updated last year
- A PyTorch native library for large model training ☆3,470 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆524 · Updated last month
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆651 · Updated 3 months ago
- Large Context Attention ☆693 · Updated 2 months ago
- [ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ☆740 · Updated 5 months ago
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆654 · Updated this week
- Annotated version of the Mamba paper ☆475 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆2,293 · Updated this week
- Collection of papers on state-space models ☆583 · Updated 3 weeks ago