radarFudan / mamba
☆18 · Updated 10 months ago
Alternatives and similar repositories for mamba
Users interested in mamba are comparing it to the libraries listed below.
- A repository for DenseSSMs ☆88 · Updated last year
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆204 · Updated 3 weeks ago
- Implementation of MoE-Mamba from the paper "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Zeta ☆110 · Updated 3 weeks ago
- Awesome list of papers that extend Mamba to various applications ☆136 · Updated 2 months ago
- A More Fair and Comprehensive Comparison between KAN and MLP ☆172 · Updated last year
- ☆72 · Updated 7 months ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆104 · Updated last week
- Unofficial implementation of the Selective Attention Transformer ☆17 · Updated 10 months ago
- PyTorch implementation of the sparse attention from the paper "Generating Long Sequences with Sparse Transformers" ☆86 · Updated 3 weeks ago
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆56 · Updated last year
- Implementation of a modular, high-performance, and simple Mamba for high-speed applications ☆36 · Updated 9 months ago
- PyTorch implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆186 · Updated 2 weeks ago
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆56 · Updated last year
- Training small GPT-2 style models using Kolmogorov-Arnold networks ☆121 · Updated last year
- A simple PyTorch implementation of high-performance Multi-Query Attention (see the sketch after this list) ☆16 · Updated 2 years ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆127 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- The official repository for "HyperZ⋅Z⋅W Operator Connects Slow-Fast Networks for Full Context Interaction" ☆39 · Updated 4 months ago
- Implementation of MambaByte from "MambaByte: Token-free Selective State Space Model" in PyTorch and Zeta ☆122 · Updated 2 weeks ago
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆30 · Updated 4 months ago
- [ICLR 2025 Spotlight] Official implementation of ToST (Token Statistics Transformer) ☆114 · Updated 6 months ago
- Unofficial implementation of Evolutionary Model Merging ☆39 · Updated last year
- ☆49 · Updated 7 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆124 · Updated 3 weeks ago
- ☆85 · Updated last year
- Autoregressive Image Generation ☆32 · Updated 2 months ago
- Official PyTorch implementation of "The Hidden Attention of Mamba Models" ☆226 · Updated last year
- State Space Models ☆70 · Updated last year
- Implementation of Infini-Transformer in PyTorch ☆111 · Updated 8 months ago
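
For reference on the Multi-Query Attention entry above, here is a minimal PyTorch sketch of the technique, in which all query heads share a single key/value head. The module name, dimensions, and hyperparameters are illustrative assumptions, not code from any listed repository.

```python
import torch
import torch.nn as nn


class MultiQueryAttention(nn.Module):
    """Multi-Query Attention: many query heads, one shared key/value head.

    A minimal illustrative sketch; names and sizes are hypothetical,
    not taken from any repository listed above.
    """

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)           # one projection per query head
        self.kv_proj = nn.Linear(d_model, 2 * self.d_head)  # a single shared K/V head
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        # Queries: (batch, heads, time, d_head)
        q = self.q_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        # Single K and V head: (batch, time, d_head) each
        k, v = self.kv_proj(x).split(self.d_head, dim=-1)
        # Insert a head dim of 1 so the shared K/V broadcasts across all query heads
        k = k.unsqueeze(1)
        v = v.unsqueeze(1)
        attn = (q @ k.transpose(-2, -1)) / self.d_head ** 0.5  # (b, h, t, t)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, -1)     # concat heads
        return self.out_proj(out)


if __name__ == "__main__":
    x = torch.randn(2, 16, 256)
    mqa = MultiQueryAttention(d_model=256, n_heads=8)
    print(mqa(x).shape)  # torch.Size([2, 16, 256])
```

Sharing one K/V head shrinks the key/value cache by a factor of `n_heads` during autoregressive decoding, which is the main motivation for MQA over standard multi-head attention.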