🔥 A minimal training framework for scaling FLA models
☆350 · Nov 15, 2025 · Updated 3 months ago
Alternatives and similar repositories for flame
Users interested in flame are comparing it to the libraries listed below.
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,428 · Updated this week)
- ☆29 · Dec 31, 2025 · Updated 2 months ago
- Open-sourcing code associated with the AAAI-25 paper "On the Expressiveness and Length Generalization of Selective State-Space Models on … (☆14 · Sep 18, 2025 · Updated 5 months ago)
- Flash-Muon: An Efficient Implementation of Muon Optimizer (☆237 · Jun 15, 2025 · Updated 8 months ago)
- ☆129 · Jun 6, 2025 · Updated 8 months ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… (☆67 · Apr 24, 2024 · Updated last year)
- Here we will test various linear attention designs. (☆62 · Apr 25, 2024 · Updated last year)
- Triton implementation of bi-directional (non-causal) linear attention (☆70 · Feb 22, 2026 · Updated last week)
- Awesome Triton Resources (☆39 · Apr 27, 2025 · Updated 10 months ago)
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning (☆140 · Updated this week)
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… (☆21 · Mar 15, 2025 · Updated 11 months ago)
- ☆66 · Jul 8, 2025 · Updated 7 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" (☆969 · Feb 5, 2026 · Updated 3 weeks ago)
- ☆12 · Jan 29, 2021 · Updated 5 years ago
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… (☆54 · Jan 12, 2026 · Updated last month)
- HGRN2: Gated Linear RNNs with State Expansion (☆56 · Aug 20, 2024 · Updated last year)
- Official PyTorch Implementation of the Longhorn Deep State Space Model (☆56 · Dec 4, 2024 · Updated last year)
- Linear Attention Sequence Parallelism (LASP) (☆88 · Jun 4, 2024 · Updated last year)
- Experiments on the impact of depth in transformers and SSMs. (☆40 · Oct 23, 2025 · Updated 4 months ago)
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule (☆469 · Feb 17, 2026 · Updated last week)
- Helpful tools and examples for working with flex-attention (☆1,136 · Feb 8, 2026 · Updated 2 weeks ago)
- [EMNLP 2023] Official implementation of the algorithm ETSC: Exact Toeplitz-to-SSM Conversion in our EMNLP 2023 paper - Accelerating Toeplitz… (☆14 · Oct 17, 2023 · Updated 2 years ago)
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights (☆19 · Oct 9, 2022 · Updated 3 years ago)
- Flash-Linear-Attention models beyond language (☆21 · Aug 28, 2025 · Updated 6 months ago)
- Official code for the paper "Attention as a Hypernetwork" (☆48 · Jun 22, 2024 · Updated last year)
- Ring attention implementation with flash attention (☆986 · Sep 10, 2025 · Updated 5 months ago)
- FlexAttention w/ FlashAttention3 Support (☆27 · Oct 5, 2024 · Updated last year)
- ☆52 · May 19, 2025 · Updated 9 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. (☆327 · Updated this week)
- Fork of the Flame repo for training of some new stuff in development (☆19 · Feb 20, 2026 · Updated last week)
- ☆122 · Feb 4, 2026 · Updated 3 weeks ago
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer (☆64 · Jul 30, 2023 · Updated 2 years ago)
- A PyTorch-native platform for training generative AI models (☆5,098 · Updated this week)
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) (☆24 · Jun 6, 2024 · Updated last year)
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling (☆40 · Dec 2, 2023 · Updated 2 years ago)
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" (☆250 · Jan 31, 2025 · Updated last year)
- Official Implementation of ACL 2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … (☆14 · Aug 25, 2023 · Updated 2 years ago)
- ☆11 · Oct 11, 2023 · Updated 2 years ago
- ☆44 · Nov 1, 2025 · Updated 4 months ago