vulus98 / Rethinking-attention
My implementation of the original transformer model (Vaswani et al.). I've also included the playground.py file for visualizing concepts that can otherwise seem hard to grasp. Pretrained IWSLT models are currently included.
☆40Updated 11 months ago
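The core of the transformer implemented above is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. As a minimal illustrative sketch (plain Python, not code from this repository):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,
    with Q, K, V given as lists of row vectors."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query against every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        # Weighted sum of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Each output row is a convex combination of the value rows, with weights set by query-key similarity; the √d_k scaling keeps the dot products from saturating the softmax as the key dimension grows.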
Related projects
Alternatives and complementary repositories for Rethinking-attention
- State Space Models☆61Updated 6 months ago
- A repository for DenseSSMs☆88Updated 6 months ago
- ☆41Updated 7 months ago
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models"☆49Updated this week
- [ICML 2024] Official PyTorch implementation of "SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-paramete…☆78Updated 2 months ago
- Implementation of ViTAR: Vision Transformer with Any Resolution, in PyTorch☆24Updated this week
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Ze…☆83Updated this week
- A Triton Kernel for incorporating Bi-Directionality in Mamba2☆47Updated 2 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models"☆199Updated 5 months ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers"☆102Updated 3 months ago
- ☆37Updated last month
- Minimal Mamba-2 implementation in PyTorch☆127Updated 4 months ago
- Second Generation of the MAMBA Software☆28Updated last month
- A simpler PyTorch + Zeta implementation of the paper: "SiMBA: Simplified Mamba-based Architecture for Vision and Multivariate Time series…☆27Updated this week
- An efficient pytorch implementation of selective scan in one file, works with both cpu and gpu, with corresponding mathematical derivatio…☆71Updated 8 months ago
- Introduce Mamba2 to Vision.☆91Updated 2 months ago
- Collect papers about Mamba (a selective state space model).☆13Updated 3 months ago
- Transformer model based on the Kolmogorov–Arnold Network (KAN), an alternative to the Multi-Layer Perceptron (MLP)☆24Updated 3 weeks ago
- ☆60Updated 2 weeks ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http…☆100Updated 11 months ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts"☆44Updated last year
- Awesome list of papers that extend Mamba to various applications.☆127Updated last month
- ☆76Updated 5 months ago
- Simba☆182Updated 7 months ago
- Official Implementation for Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data☆49Updated 4 months ago
- First-principle implementations of groundbreaking AI algorithms using a wide range of deep learning frameworks, accompanied by supporting…☆66Updated 3 weeks ago
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models"☆67Updated this week
- Pan-Mamba: Effective Pan-Sharpening with State Space Model☆80Updated 7 months ago
- Pytorch Implementation of the paper: "Learning to (Learn at Test Time): RNNs with Expressive Hidden States"☆23Updated this week
- Code Implementation of EfficientVMamba☆183Updated 6 months ago