BlinkDL / LinearAttentionArena
Here we will test various linear attention designs.
☆60 · Updated 11 months ago
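As background for the listing below: "linear attention" replaces softmax attention, softmax(QKᵀ)V, with a kernelized form φ(Q)(φ(K)ᵀV), making the cost linear rather than quadratic in sequence length. The following is a minimal, non-causal PyTorch sketch of that classic formulation (feature map φ = elu + 1, as in Katharopoulos et al., 2020); it is illustrative only and is not code from this repository, and the function name is hypothetical.

```python
# Illustrative sketch of non-causal linear attention (not from this repo).
# Softmax attention, O(n^2): softmax(Q K^T / sqrt(d)) V
# Linear attention,  O(n):   phi(Q) (phi(K)^T V) for a suitable feature map phi.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """q, k, v: (batch, heads, seq_len, head_dim) tensors."""
    q = F.elu(q) + 1.0  # phi(x) = elu(x) + 1 keeps features positive
    k = F.elu(k) + 1.0
    # Aggregate keys and values once into a (head_dim, head_dim) state per head
    kv = torch.einsum("bhnd,bhne->bhde", k, v)
    # Per-query normalizer: phi(q_i) . sum_j phi(k_j)
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)

# Usage sketch: out = linear_attention(q, k, v)  # same shape as v
```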
Alternatives and similar repositories for LinearAttentionArena:
Users interested in LinearAttentionArena are comparing it to the libraries listed below.
- Using FlexAttention to compute attention with different masking patterns · ☆42 · Updated 6 months ago
- Stick-breaking attention · ☆49 · Updated last week
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" · ☆76 · Updated last week
- ☆47 · Updated last year
- ☆33 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion · ☆53 · Updated 7 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" · ☆26 · Updated 11 months ago
- ☆52 · Updated 8 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models · ☆30 · Updated 9 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) · ☆24 · Updated 9 months ago
- Fast and memory-efficient exact attention · ☆67 · Updated 3 weeks ago
- ☆74 · Updated 7 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore · ☆26 · Updated 6 months ago
- DPO, but faster 🚀 · ☆40 · Updated 3 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… · ☆44 · Updated 8 months ago
- 🔥 A minimal training framework for scaling FLA models · ☆82 · Updated this week
- ☆63 · Updated last month
- A repository for research on medium-sized language models. · ☆76 · Updated 10 months ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… · ☆64 · Updated 11 months ago
- ☆36 · Updated last week
- ☆36 · Updated last month
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated · ☆31 · Updated 7 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. · ☆58 · Updated 2 months ago
- GoldFinch and other hybrid transformer components · ☆45 · Updated 8 months ago
- Triton implementation of the HyperAttention algorithm · ☆47 · Updated last year
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. · ☆65 · Updated 3 months ago
- Triton implementation of bi-directional (non-causal) linear attention · ☆44 · Updated last month
- ☆30 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. · ☆46 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 · ☆27 · Updated last month