kyegomez / MambaTransformer
Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling
☆193 · Updated last month
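Below is a minimal sketch (not the repository's actual code) of the idea the project describes: interleaving SSM-style sequence-mixing blocks with standard multi-head attention blocks in one stack. The `SimpleSSMBlock` here is a toy gated linear recurrence standing in for a real Mamba layer, and names such as `HybridMambaTransformer` are illustrative assumptions, not this repo's API.

```python
# Hedged sketch of a hybrid Mamba/Transformer stack: SSM-style blocks
# alternate with attention blocks. All class names are illustrative.
import torch
import torch.nn as nn


class SimpleSSMBlock(nn.Module):
    """Toy gated linear recurrence over the sequence (stand-in for a Mamba layer)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.in_proj = nn.Linear(d_model, 2 * d_model)
        self.decay = nn.Parameter(torch.zeros(d_model))  # per-channel decay logit
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, seq_len, d_model)
        residual = x
        u, gate = self.in_proj(self.norm(x)).chunk(2, dim=-1)
        a = torch.sigmoid(self.decay)        # decay in (0, 1), shape (d_model,)
        state = torch.zeros_like(u[:, 0])    # (batch, d_model)
        outputs = []
        for t in range(u.size(1)):           # linear-time recurrent scan
            state = a * state + (1 - a) * u[:, t]
            outputs.append(state)
        y = torch.stack(outputs, dim=1) * torch.sigmoid(gate)
        return residual + self.out_proj(y)


class AttentionBlock(nn.Module):
    """Standard pre-norm multi-head self-attention block with a residual."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        h = self.norm(x)
        out, _ = self.attn(h, h, h, need_weights=False)
        return x + out


class HybridMambaTransformer(nn.Module):
    """Alternates SSM-style blocks and attention blocks, as in hybrid designs."""

    def __init__(self, d_model=64, n_heads=4, depth=4):
        super().__init__()
        self.layers = nn.ModuleList(
            SimpleSSMBlock(d_model) if i % 2 == 0 else AttentionBlock(d_model, n_heads)
            for i in range(depth)
        )

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x


if __name__ == "__main__":
    model = HybridMambaTransformer()
    x = torch.randn(2, 128, 64)   # (batch, seq_len, d_model)
    print(model(x).shape)         # torch.Size([2, 128, 64])
```

In practice, the recurrent scan would be replaced by a real selective-SSM kernel (e.g. the `mamba_ssm` package) for speed; the point of the sketch is only the alternating layer layout.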
Alternatives and similar repositories for MambaTransformer
Users interested in MambaTransformer are comparing it to the libraries listed below.
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆222 · Updated last year
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆169 · Updated 2 months ago
- Simba ☆207 · Updated last year
- Awesome list of papers that extend Mamba to various applications. ☆133 · Updated 2 months ago
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆55 · Updated 2 months ago
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Ze… ☆105 · Updated last month
- Minimal Mamba-2 implementation in PyTorch ☆198 · Updated 11 months ago
- ☆133 · Updated last year
- A Triton Kernel for incorporating Bi-Directionality in Mamba2 ☆68 · Updated 5 months ago
- PyTorch implementation of the Differential-Transformer architecture for sequence modeling, specifically tailored as a decoder-only model … ☆67 · Updated 7 months ago
- Notes on the Mamba and the S4 model (Mamba: Linear-Time Sequence Modeling with Selective State Spaces) ☆168 · Updated last year
- Pytorch implementation of the xLSTM model by Beck et al. (2024) ☆165 · Updated 9 months ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆137 · Updated 4 months ago
- Code repository for Black Mamba ☆246 · Updated last year
- Trying out the Mamba architecture on small examples (cifar-10, shakespeare char level etc.) ☆46 · Updated last year
- Reading list for research topics in state-space models ☆292 · Updated last week
- KAN for Vision Transformer ☆247 · Updated 7 months ago
- Collection of papers on state-space models ☆593 · Updated last month
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆54 · Updated last year
- Simple, minimal implementation of the Mamba SSM in one pytorch file. Using logcumsumexp (Heisen sequence). ☆118 · Updated 7 months ago
- Implementation of xLSTM in Pytorch from the paper: "xLSTM: Extended Long Short-Term Memory" ☆118 · Updated 2 months ago
- Causal depthwise conv1d in CUDA, with a PyTorch interface ☆471 · Updated last week
- Cuda implementation of Extended Long Short Term Memory (xLSTM) with C++ and PyTorch ports ☆87 · Updated 11 months ago
- my attempts at implementing various bits of Sepp Hochreiter's new xLSTM architecture ☆129 · Updated last year
- Implementation of MambaByte in "MambaByte: Token-free Selective State Space Model" in Pytorch and Zeta ☆118 · Updated 2 months ago
- A curated collection of papers, tutorials, videos, and other valuable resources related to Mamba. ☆574 · Updated this week
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆83 · Updated 3 months ago
- xLSTM as Generic Vision Backbone ☆478 · Updated 7 months ago
- Computation-Efficient Era: A Comprehensive Survey of State Space Models in Medical Image Analysis ☆236 · Updated 3 months ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆105 · Updated last year