MzeroMiko / mamba-mini
An efficient PyTorch implementation of selective scan in one file, working on both CPU and GPU, with the corresponding mathematical derivation. It is probably the code closest to selective_scan_cuda in Mamba.
☆101 · Updated 2 months ago
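The selective scan the description refers to can be illustrated with a minimal, hypothetical sketch in plain Python. This is not the repo's API: the actual implementation is batched, multi-channel, and vectorized in PyTorch, while the sketch below only shows the per-step recurrence h_t = exp(Δ_t·A)·h_{t-1} + Δ_t·B_t·x_t, y_t = C_t·h_t for a scalar state.

```python
import math

def selective_scan(x, dt, A, B, C):
    """Hypothetical scalar selective-scan sketch (not the repo's API).

    Per step t:
        h_t = exp(dt_t * A) * h_{t-1} + dt_t * B_t * x_t
        y_t = C_t * h_t
    where dt, B, C are input-dependent (the "selective" part) and A is a
    learned state-decay parameter.
    """
    h = 0.0
    ys = []
    for x_t, dt_t, B_t, C_t in zip(x, dt, B, C):
        h = math.exp(dt_t * A) * h + dt_t * B_t * x_t  # state update
        ys.append(C_t * h)                             # readout
    return ys

# With A = 0 and dt = B = C = 1, the recurrence degenerates to a cumulative sum:
# selective_scan([1.0, 2.0, 3.0], [1.0]*3, 0.0, [1.0]*3, [1.0]*3) -> [1.0, 3.0, 6.0]
```

The real implementation evaluates this recurrence in parallel over the sequence (which is what makes a one-file CPU/GPU version nontrivial), but the step-by-step form above is the math being computed.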
Alternatives and similar repositories for mamba-mini
Users interested in mamba-mini are comparing it to the repositories listed below.
- Code implementation of EfficientVMamba ☆240 · Updated last year
- [ICLR 2025 Spotlight] Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures ☆536 · Updated 10 months ago
- Official repository of MLLA (NeurIPS 2024) ☆369 · Updated 6 months ago
- Official PyTorch implementation of "The Hidden Attention of Mamba Models" ☆232 · Updated 2 months ago
- Code for the paper LocalMamba: Visual State Space Model with Windowed Selective Scan ☆273 · Updated last year
- [ICCV 2025] Introduces Mamba2 to vision ☆180 · Updated 2 months ago
- Awesome list of papers that extend Mamba to various applications ☆139 · Updated 7 months ago
- [ICLR 2025] Official implementation of Autoregressive Pretraining with Mamba in Vision ☆90 · Updated 7 months ago
- GroupMamba: Parameter-Efficient and Accurate Group Visual State Space Model [CVPR 2025] ☆129 · Updated 9 months ago
- Official repo for [TPAMI'23] "Vision Transformer with Quadrangle Attention" ☆234 · Updated 3 months ago
- ☆79 · Updated 10 months ago
- Official repository of Slide-Transformer (CVPR 2023) ☆174 · Updated last year
- [ICML 2024] Official PyTorch implementation of "SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-paramete… ☆108 · Updated last year
- A Triton kernel for incorporating bi-directionality in Mamba2 ☆76 · Updated last year
- Mamba in Vision: A Comprehensive Survey of Techniques and Applications ☆133 · Updated last year
- [NeurIPS 2024 Spotlight] Official implementation of MambaTree: Tree Topology is All You Need in State Space Model ☆105 · Updated last year
- Official implementation of Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data ☆65 · Updated last year
- Minimal Mamba-2 implementation in PyTorch ☆240 · Updated last year
- Open-source implementation of "Vision Transformers Need Registers" ☆206 · Updated this week
- Official repository of Polarity-aware Linear Attention for Vision Transformers (ICLR 2025) ☆83 · Updated 2 months ago
- Implementation of Vision Mamba from the paper "Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Mod… ☆481 · Updated 3 weeks ago
- Official implementation of the CrossMAE paper: Rethinking Patch Dependence for Masked Autoencoders ☆129 · Updated 9 months ago
- [ICLR 2025] Spatial-Mamba: Effective Visual State Space Models via Structure-Aware State Fusion ☆187 · Updated 10 months ago
- Official repository of InLine attention (NeurIPS 2024) ☆57 · Updated last year
- 1.5−3.0× lossless training or pre-training speedup; an off-the-shelf, easy-to-implement algorithm for efficient training of foundatio… ☆225 · Updated last year
- [CVPR'23 & TPAMI'25] Hard Patches Mining for Masked Image Modeling & Bootstrap Masked Visual Modeling via Hard Patch Mining ☆107 · Updated 8 months ago
- (CVPR 2024) RMT: Retentive Networks Meet Vision Transformer ☆373 · Updated last year
- [IEEE TCSVT] Vivim: a Video Vision Mamba for Medical Video Segmentation ☆185 · Updated 7 months ago
- PyTorch reimplementation of FlexiViT: One Model for All Patch Sizes ☆66 · Updated last year
- Implementation of Hydra Attention: Efficient Attention with Many Heads (https://arxiv.org/abs/2209.07484) ☆14 · Updated 3 years ago