vasqu / mamba2-torch
☆38 · Updated 2 months ago
Alternatives and similar repositories for mamba2-torch:
Users interested in mamba2-torch are comparing it to the libraries listed below.
- Minimal Mamba-2 implementation in PyTorch ☆166 · Updated 7 months ago
- PyTorch implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆155 · Updated this week
- Causal depthwise conv1d in CUDA, with a PyTorch interface ☆383 · Updated last month
- Official PyTorch implementation of "The Hidden Attention of Mamba Models" ☆211 · Updated 8 months ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆119 · Updated 5 months ago
- A Triton kernel for incorporating bidirectionality in Mamba2 ☆60 · Updated last month
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆104 · Updated last year
- ☆195 · Updated 3 weeks ago
- An efficient PyTorch implementation of selective scan in one file; works on both CPU and GPU, with corresponding mathematical derivatio… ☆78 · Updated 10 months ago
- Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures ☆398 · Updated 2 months ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆149 · Updated 8 months ago
- Awesome list of papers that extend Mamba to various applications ☆129 · Updated last month
- Collection of papers on state-space models ☆572 · Updated this week
- Simba ☆199 · Updated 10 months ago
- Reading list for research topics in state-space models ☆257 · Updated last week
- ☆244 · Updated 4 months ago
- [ICLR 2025] Official implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆482 · Updated this week
- Simple, minimal implementation of the Mamba SSM in one PyTorch file, using logcumsumexp (Heisen sequence) ☆106 · Updated 3 months ago
- A simple and efficient Mamba implementation in pure PyTorch and MLX ☆1,112 · Updated last month
- Official repository of MLLA (NeurIPS 2024) ☆271 · Updated 2 months ago
- Integrating Mamba/SSMs with Transformers for enhanced long-context, high-quality sequence modeling ☆182 · Updated this week
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆249 · Updated 9 months ago
- KAN for Vision Transformer ☆240 · Updated 3 months ago
- Implementation of Griffin from the paper "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆51 · Updated this week
- Implementation of MambaByte from "MambaByte: Token-free Selective State Space Model" in PyTorch and Zeta ☆114 · Updated this week
- Some preliminary explorations of Mamba's context scaling ☆209 · Updated 11 months ago
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆191 · Updated this week
- Official JAX implementation of "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆397 · Updated 5 months ago
- Official PyTorch implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone ☆894 · Updated last month
- A repository for DenseSSMs ☆86 · Updated 9 months ago