vulus98 / Rethinking-attention
My implementation of the original transformer model (Vaswani et al.). I've also included a playground.py file for visualizing concepts that are otherwise hard to grasp. Pretrained IWSLT models are currently included.
☆43 · Updated 3 months ago
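For orientation, here is a minimal PyTorch sketch of the scaled dot-product attention that the original transformer (Vaswani et al., 2017) is built around. It is illustrative only and is not taken from the Rethinking-attention repository's code.

```python
# A minimal, illustrative sketch of scaled dot-product attention, the core
# operation of the original transformer (Vaswani et al., 2017). Not code
# from the Rethinking-attention repository.
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """q, k, v: tensors of shape (batch, heads, seq_len, d_k)."""
    d_k = q.size(-1)
    # Similarity of every query with every key, scaled to stabilize gradients.
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        # Positions where mask == 0 receive ~zero attention weight.
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)  # attention distribution over keys
    return torch.matmul(weights, v), weights

# Example: batch of 1, 8 heads, 10 tokens, 64-dimensional head size.
q = k = v = torch.randn(1, 8, 10, 64)
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)  # (1, 8, 10, 64) and (1, 8, 10, 10)
```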
Alternatives and similar repositories for Rethinking-attention:
Users interested in Rethinking-attention are comparing it to the libraries listed below.
- A repository for DenseSSMs ☆87 · Updated 11 months ago
- State Space Models ☆67 · Updated 10 months ago
- ☆47 · Updated 11 months ago
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆51 · Updated last month
- ☆53 · Updated last month
- [ICML 2024] Official PyTorch implementation of "SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-paramete… ☆100 · Updated 6 months ago
- A simpler PyTorch + Zeta implementation of the paper: "SiMBA: Simplified Mamba-based Architecture for Vision and Multivariate Time series… ☆27 · Updated 4 months ago
- The official code for the paper: Token Summarisation for Efficient Vision Transformers via Graph-based Token Propagation ☆27 · Updated last year
- [NeurIPS 2023] Lightweight Vision Transformer with Bidirectional Interaction ☆23 · Updated last year
- The official GitHub page for the survey paper "A Survey of RWKV". ☆24 · Updated 2 months ago
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆43 · Updated last week
- Implementation of ViTAR: Vision Transformer with Any Resolution in PyTorch ☆32 · Updated 4 months ago
- Awesome list of papers that extend Mamba to various applications. ☆132 · Updated 3 months ago
- Simba ☆202 · Updated 11 months ago
- [ICLR 2025] Official code release for "Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation" ☆40 · Updated 3 weeks ago
- The official PyTorch implementation of the paper "Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT … ☆34 · Updated last year
- ☆35 · Updated 8 months ago
- Trainable Highly-expressive Activation Functions (ECCV 2024) ☆38 · Updated 3 weeks ago
- ☆65 · Updated 5 months ago
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Ze… ☆100 · Updated last month
- Scattering Vision Transformer ☆50 · Updated last year
- Transformer model based on the Kolmogorov–Arnold Network (KAN), an alternative to the Multi-Layer Perceptron (MLP) ☆27 · Updated this week
- ☆59 · Updated last year
- A Triton Kernel for incorporating Bi-Directionality in Mamba2 ☆63 · Updated 3 months ago
- Official Implementation for Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data ☆59 · Updated 8 months ago
- ☆24 · Updated 5 months ago
- (NeurIPS 2023) PyTorch implementation of "Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation" ☆18 · Updated 5 months ago
- ☆43 · Updated 2 years ago
- PyTorch implementation of the Differential-Transformer architecture for sequence modeling, specifically tailored as a decoder-only model … ☆55 · Updated 4 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆216 · Updated 9 months ago