fla-org / flash-linear-attention
🚀 Efficient implementations of state-of-the-art linear attention models in Torch and Triton
★ 1,926 · Updated this week
Alternatives and similar repositories for flash-linear-attention:
Users interested in flash-linear-attention are comparing it to the libraries listed below.
- Helpful tools and examples for working with flex-attention · ★ 647 · Updated this week
- Puzzles for learning Triton · ★ 1,413 · Updated 3 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch · ★ 502 · Updated 3 months ago
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA · ★ 746 · Updated this week
- Ring attention implementation with flash attention · ★ 677 · Updated this week
- Tile primitives for speedy kernels · ★ 2,060 · Updated this week
- Large Context Attention · ★ 684 · Updated 3 weeks ago
- FlashInfer: Kernel Library for LLM Serving · ★ 2,111 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. · ★ 514 · Updated this week
- A bibliography and survey of the papers surrounding o1 · ★ 1,160 · Updated 3 months ago
- [ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters · ★ 514 · Updated last week
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States · ★ 1,126 · Updated 7 months ago
- Flash Attention in ~100 lines of CUDA (forward pass only) · ★ 700 · Updated last month
- Annotated version of the Mamba paper · ★ 473 · Updated 11 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding · ★ 1,194 · Updated 4 months ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes · ★ 772 · Updated this week
- Tutel MoE: An Optimized Mixture-of-Experts Implementation · ★ 766 · Updated this week
- A simple and efficient Mamba implementation in pure PyTorch and MLX. · ★ 1,125 · Updated 2 months ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch · ★ 639 · Updated 2 months ago
- A PyTorch native library for large model training · ★ 3,326 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… · ★ 2,190 · Updated this week
- Collection of papers on state-space models · ★ 575 · Updated 3 weeks ago
- Building blocks for foundation models. · ★ 448 · Updated last year
- depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile · ★ 582 · Updated 2 months ago
- Minimalistic large language model 3D-parallelism training · ★ 1,483 · Updated this week
- Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24) · ★ 968 · Updated this week
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ · ★ 597 · Updated this week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models · ★ 1,341 · Updated 7 months ago
- [ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation · ★ 717 · Updated 4 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration · ★ 2,760 · Updated last week