berlino / gated_linear_attention
☆102 · Updated last year
Alternatives and similar repositories for gated_linear_attention:
Users interested in gated_linear_attention are comparing it to the libraries listed below.
- [EMNLP 2022] Official implementation of Transnormer from the paper "The Devil in Linear Transformer" ☆60 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated 2 years ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN from the paper "Hierarchically Gated Recurrent Neural Network for Sequence Modeling" ☆64 · Updated 11 months ago
- 🔥 A minimal training framework for scaling FLA models ☆92 · Updated last week
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention ☆99 · Updated 9 months ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆82 · Updated 2 years ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆80 · Updated this week
- Sparse Backpropagation for Mixture-of-Expert Training ☆28 · Updated 9 months ago
- Here we will test various linear attention designs. ☆60 · Updated 11 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆80 · Updated 6 months ago
- Mixture of Attention Heads ☆43 · Updated 2 years ago
- Stick-breaking attention ☆49 · Updated 2 weeks ago
- HGRN2: Gated Linear RNNs with State Expansion ☆53 · Updated 7 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆151 · Updated last year
- Low-bit optimizers for PyTorch ☆125 · Updated last year
- Code for the paper "Patch-Level Training for Large Language Models" ☆81 · Updated 4 months ago
- Inference speed benchmark for "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆64 · Updated 8 months ago
- Triton implementation of bi-directional (non-causal) linear attention ☆44 · Updated last month
- A repository for DenseSSMs ☆87 · Updated 11 months ago
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆121 · Updated 2 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆46 · Updated last month
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆26 · Updated 11 months ago
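Most of the repositories above revolve around the same idea as gated_linear_attention itself: replacing softmax attention with a gated linear recurrence over a matrix-valued state. As a point of reference, here is a minimal, illustrative PyTorch sketch of that recurrence. It is an assumption-laden paraphrase for orientation only, not the implementation in berlino/gated_linear_attention or any repository listed here, and all shapes and names are hypothetical:

```python
import torch

def gated_linear_attention(q, k, v, g):
    """Naive recurrent form of gated linear attention (illustrative sketch).

    Hypothetical shapes, chosen for clarity:
      q, k: (batch, seq_len, d_k)  queries and keys
      v:    (batch, seq_len, d_v)  values
      g:    (batch, seq_len, d_k)  per-step forget gates in (0, 1)
    """
    B, T, d_k = q.shape
    d_v = v.size(-1)
    S = q.new_zeros(B, d_k, d_v)  # matrix-valued recurrent state
    outs = []
    for t in range(T):
        # Decay the state with the gate, then write the key-value outer product:
        #   S_t = diag(g_t) @ S_{t-1} + k_t^T v_t
        S = g[:, t, :, None] * S + k[:, t, :, None] * v[:, t, None, :]
        # Read out with the query: o_t = q_t @ S_t
        outs.append(torch.einsum("bk,bkv->bv", q[:, t], S))
    return torch.stack(outs, dim=1)  # (batch, seq_len, d_v)

# Tiny smoke test with random inputs.
q = torch.randn(2, 16, 32)
k = torch.randn(2, 16, 32)
v = torch.randn(2, 16, 64)
g = torch.sigmoid(torch.randn(2, 16, 32))
out = gated_linear_attention(q, k, v, g)  # (2, 16, 64)
```

Real implementations, including several of the Triton-based repositories listed above, compute this chunkwise in parallel rather than with a Python loop; the loop form is only meant to make the state update concrete.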