WailordHe / DenseSSM
A repository for DenseSSMs
☆88 · Updated last year
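The DenseSSM paper's core idea is to add dense connections between the hidden states of shallow and deep SSM layers, so fine-grained information from early layers is preserved deeper in the stack. Below is a minimal, hypothetical PyTorch sketch of that gated hidden-state fusion, not the repository's actual code; the module name `DenseFusion` and all shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenseFusion(nn.Module):
    """Fuse hidden states from earlier layers into the current layer's input
    (an illustrative sketch of dense hidden connections, not DenseSSM's code)."""
    def __init__(self, dim: int, num_prev: int):
        super().__init__()
        # One projection and one gate per retained earlier layer.
        self.proj = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_prev)])
        self.gate = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_prev)])

    def forward(self, h: torch.Tensor, prev: list[torch.Tensor]) -> torch.Tensor:
        # h:    (batch, seq, dim) hidden state entering the current layer
        # prev: hidden states from up to num_prev earlier layers, same shape
        for p, g, h_prev in zip(self.proj, self.gate, prev):
            # Gated residual fusion of the earlier layer's hidden state.
            h = h + torch.sigmoid(g(h_prev)) * p(h_prev)
        return h

if __name__ == "__main__":
    dim, m = 64, 2
    fusion = DenseFusion(dim, m)
    h = torch.randn(1, 16, dim)
    prev = [torch.randn(1, 16, dim) for _ in range(m)]
    print(fusion(h, prev).shape)  # torch.Size([1, 16, 64])
```

In a full model, each SSM block would keep the last `m` layers' hidden states and apply this fusion before its own state-space update.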
Alternatives and similar repositories for DenseSSM
Users interested in DenseSSM are comparing it to the repositories listed below:
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated last year
- Triton implementation of bi-directional (non-causal) linear attention ☆56 · Updated 8 months ago
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated 3 weeks ago
- ☆105 · Updated last year
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Ze… ☆110 · Updated 3 weeks ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆55 · Updated 2 years ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆64 · Updated 2 years ago
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆31 · Updated 5 months ago
- [EMNLP 2022] Official implementation of Transnormer from the paper "The Devil in Linear Transformer" ☆63 · Updated 2 years ago
- State Space Models ☆70 · Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention mechanism ☆102 · Updated last year
- ☆15 · Updated 3 months ago
- This is the official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆39 · Updated 11 months ago
- Awesome list of papers that extend Mamba to various applications. ☆138 · Updated 3 months ago
- ☆47 · Updated last year
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆132 · Updated last week
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆86 · Updated 2 years ago
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆57 · Updated last year
- Mixture of Attention Heads ☆49 · Updated 2 years ago
- ☆183 · Updated last year
- ☆16 · Updated 2 years ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆227 · Updated last year
- ☆197 · Updated last year
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization ☆99 · Updated 4 months ago
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice routing ☆25 · Updated 5 months ago
- The official GitHub page for the survey paper "A Survey of RWKV". ☆29 · Updated 8 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retention ☆67 · Updated last year
- ☆148 · Updated last year
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆106 · Updated last week
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆32 · Updated 2 years ago