vulus98 / Rethinking-attention
My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise hard-to-grasp concepts. Pretrained IWSLT models are currently included.
☆43 · Updated 8 months ago
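As a rough illustration of the core operation such a from-scratch transformer implements, here is a minimal sketch of scaled dot-product attention in PyTorch. The function name, tensor shapes, and usage are illustrative assumptions, not code taken from the Rethinking-attention repository.

```python
# Minimal sketch of scaled dot-product attention (Vaswani et al., 2017).
# Illustrative only; not taken from the Rethinking-attention repository.
import math
import torch
import torch.nn.functional as F


def scaled_dot_product_attention(q, k, v, mask=None):
    """q, k, v: (batch, heads, seq_len, d_k); mask broadcastable to the score shape."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (batch, heads, seq_q, seq_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)  # attention distribution over the keys
    return weights @ v, weights


# Example usage with random tensors
q = k = v = torch.randn(2, 8, 10, 64)  # batch=2, heads=8, seq=10, d_k=64
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)  # (2, 8, 10, 64) and (2, 8, 10, 10)
```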
Alternatives and similar repositories for Rethinking-attention
Users interested in Rethinking-attention are comparing it to the libraries listed below.
- State Space Models ☆70 · Updated last year
- ☆47 · Updated last year
- Simba ☆211 · Updated last year
- ☆72 · Updated 6 months ago
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Ze… ☆110 · Updated 3 weeks ago
- A repository for DenseSSMs ☆88 · Updated last year
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated 2 weeks ago
- ☆66 · Updated 10 months ago
- Official repository of Polarity-aware Linear Attention for Vision Transformers (ICLR 2025) ☆69 · Updated 3 months ago
- [ICML 2024] Official PyTorch implementation of "SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-paramete… ☆107 · Updated last year
- Awesome list of papers that extend Mamba to various applications. ☆136 · Updated 2 months ago
- ☆137 · Updated last year
- PyTorch implementation of the Differential-Transformer architecture for sequence modeling, specifically tailored as a decoder-only model … ☆73 · Updated 10 months ago
- An efficient pytorch implementation of selective scan in one file, works with both cpu and gpu, with corresponding mathematical derivatio… ☆93 · Updated last year
- ☆215 · Updated 6 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆226 · Updated last year
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- [ICLR 2025] Official Code Release for Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation ☆45 · Updated 6 months ago
- Minimal Mamba-2 implementation in PyTorch ☆218 · Updated last year
- Code Implementation of EfficientVMamba ☆224 · Updated last year
- Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficien… ☆117 · Updated 2 weeks ago
- Official Implementation for Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data ☆63 · Updated last year
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆204 · Updated 3 weeks ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆105 · Updated last year
- The official GitHub page for the survey paper "A Survey of RWKV". ☆29 · Updated 7 months ago
- A simpler Pytorch + Zeta Implementation of the paper: "SiMBA: Simplified Mamba-based Architecture for Vision and Multivariate Time series… ☆28 · Updated 9 months ago
- Official repository for CVPR24 Precognition Workshop Paper: VMRNN: Integrating Vision Mamba and LSTM for Efficient and Accurate Spatiotem… ☆147 · Updated last year
- [NeurIPS2023] Lightweight Vision Transformer with Bidirectional Interaction ☆25 · Updated last year
- Pytorch Implementation of the paper: "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated this week
- Implementation of xLSTM in Pytorch from the paper: "xLSTM: Extended Long Short-Term Memory" ☆119 · Updated 2 weeks ago