Weixin-Liang / Mixture-of-Mamba
☆49 · Updated 7 months ago
Alternatives and similar repositories for Mixture-of-Mamba
Users interested in Mixture-of-Mamba are comparing it to the libraries listed below.
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models"☆105Updated last week
- X-Reasoner: Towards Generalizable Reasoning Across Modalities and Domains☆47Updated 3 months ago
- ☆47Updated last year
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Ze…☆110Updated 2 weeks ago
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025)☆29Updated 4 months ago
- A repository for DenseSSMs☆88Updated last year
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models"☆226Updated last year
- Pytorch Implementation of the paper: "Learning to (Learn at Test Time): RNNs with Expressive Hidden States"☆25Updated this week
- Awesome list of papers that extend Mamba to various applications.☆136Updated 2 months ago
- Official implementation of RMoE (Layerwise Recurrent Router for Mixture-of-Experts)☆22Updated last year
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling☆204Updated 2 weeks ago
- HGRN2: Gated Linear RNNs with State Expansion☆54Updated last year
- The official implementation of the ICLR 2025 paper "Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models" ☆16 · Updated 4 months ago
- Unofficial Implementation of Selective Attention Transformer ☆17 · Updated 10 months ago
- ☆72 · Updated 6 months ago
- Official Code for Paper: Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation ☆121 · Updated last month
- ☆15 · Updated 2 months ago
- Official PyTorch Implementation for Vision-Language Models Create Cross-Modal Task Representations, ICML 2025 ☆30 · Updated 3 months ago
- ☆34 · Updated 5 months ago
- This is the official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆39 · Updated 10 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆51 · Updated 5 months ago
- ML-Mamba: Efficient Multi-Modal Large Language Model Utilizing Mamba-2 ☆66 · Updated 9 months ago
- [ICLR 2025] Official Code Release for Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation