vulus98 / Rethinking-attention
My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise hard-to-grasp concepts. Pretrained IWSLT models are currently included.
☆43 · Updated 10 months ago
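For context, the core operation of the original transformer implemented here is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. Below is a minimal PyTorch sketch of that operation — an illustrative example only, not code from this repository; the function name and toy shapes are my own:

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # Scores: similarity of every query with every key, scaled by sqrt(d_k)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        # Masked positions (e.g. padding or future tokens) are excluded from the softmax
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)  # attention weights, each row sums to 1
    return weights @ v, weights

# Toy usage: batch of 2 sequences, 5 tokens each, model dimension 16
q = k = v = torch.randn(2, 5, 16)
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)  # torch.Size([2, 5, 16]) torch.Size([2, 5, 5])
```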
Alternatives and similar repositories for Rethinking-attention
Users interested in Rethinking-attention are comparing it to the libraries listed below.
- State Space Models ☆70 · Updated last year
- [ICML 2024] Official PyTorch implementation of "SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-paramete… ☆107 · Updated last year
- A repository for DenseSSMs ☆88 · Updated last year
- ☆75 · Updated 8 months ago
- Simba ☆213 · Updated last year
- ☆218 · Updated 8 months ago
- ☆47 · Updated last year
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Ze… ☆110 · Updated last week
- ☆67 · Updated 11 months ago
- Official implementation of Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data ☆64 · Updated last year
- Implementation of ViTAR: Vision Transformer with Any Resolution in PyTorch ☆38 · Updated 11 months ago
- ☆137 · Updated last year
- Official repository of Polarity-aware Linear Attention for Vision Transformers (ICLR 2025) ☆74 · Updated 5 months ago
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated last week
- Awesome list of papers that extend Mamba to various applications. ☆138 · Updated 4 months ago
- PyTorch implementation of the Differential-Transformer architecture for sequence modeling, specifically tailored as a decoder-only model … ☆77 · Updated 11 months ago
- Code Implementation of EfficientVMamba ☆231 · Updated last year
- Minimal Mamba-2 implementation in PyTorch ☆222 · Updated last year
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆228 · Updated this week
- An efficient pytorch implementation of selective scan in one file, works with both cpu and gpu, with corresponding mathematical derivatio… ☆96 · Updated last year
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- A Triton Kernel for incorporating Bi-Directionality in Mamba2 ☆75 · Updated 10 months ago
- Official repository for CVPR24 Precognition Workshop Paper: VMRNN: Integrating Vision Mamba and LSTM for Efficient and Accurate Spatiotem… ☆150 · Updated last year
- The official project website of "KernelWarehouse: Rethinking the Design of Dynamic Convolution" (KW for short, published in ICML 2024) ☆101 · Updated last year
- [NeurIPS 2023] Lightweight Vision Transformer with Bidirectional Interaction ☆26 · Updated last year
- [ICLR 2025] Official Code Release for Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation ☆47 · Updated 7 months ago
- ☆68 · Updated last year
- ☆29 · Updated last year
- This is the official code for the paper: Token Summarisation for Efficient Vision Transformers via Graph-based Token Propagation ☆31 · Updated last year
- Transformer model based on the Kolmogorov–Arnold Network (KAN), an alternative to the Multi-Layer Perceptron (MLP) ☆28 · Updated 3 months ago