state-spaces / mamba
Mamba SSM architecture
☆15,647 · Updated this week
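For context, the repository packages the Mamba block as a drop-in PyTorch module. The snippet below is a minimal sketch following the usage shown in the project's README; it assumes the `mamba_ssm` package is installed and a CUDA device is available (the fused selective-scan kernels are CUDA-only).

```python
# Minimal sketch of the Mamba block from the mamba_ssm package, following the
# usage example in the repository README. Assumes mamba_ssm is installed and a
# CUDA GPU is available, since the fused kernels require CUDA.
import torch
from mamba_ssm import Mamba

batch, length, dim = 2, 64, 16
x = torch.randn(batch, length, dim).to("cuda")

block = Mamba(
    d_model=dim,   # model (channel) dimension
    d_state=16,    # SSM state expansion factor
    d_conv=4,      # local convolution width
    expand=2,      # block expansion factor
).to("cuda")

y = block(x)  # output keeps the input shape: (batch, length, dim)
assert y.shape == x.shape
```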
Alternatives and similar repositories for mamba
Users interested in mamba are comparing it to the libraries listed below.
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,847 · Updated last year
- [ICML 2024] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model ☆3,547 · Updated 6 months ago
- Fast and memory-efficient exact attention ☆18,997 · Updated this week
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ☆1,309 · Updated 8 months ago
- VMamba: Visual State Space Models; code is based on Mamba ☆2,761 · Updated 5 months ago
- Structured state space sequence models ☆2,706 · Updated last year
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" ☆1,202 · Updated last year
- Foundation Architecture for (M)LLMs ☆3,099 · Updated last year
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,632 · Updated last month
- Official PyTorch Implementation of "Scalable Diffusion Models with Transformers" ☆7,724 · Updated last year
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,531 · Updated last week
- An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN). ☆4,443 · Updated last year
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆9,866 · Updated last week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆19,350 · Updated this week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆12,575 · Updated 8 months ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,055 · Updated this week
- Meta-Transformer for Unified Multimodal Learning ☆1,622 · Updated last year
- Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Py… ☆23,654 · Updated last week
- [CVPR 2025] Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone ☆1,662 · Updated last month
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆10,841 · Updated 9 months ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆23,327 · Updated last year
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,241 · Updated last year
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,055 · Updated 4 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆3,045 · Updated this week
- Official repository of the xLSTM. ☆1,960 · Updated 2 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆3,078 · Updated 3 months ago
- PyTorch native post-training library ☆5,418 · Updated last week
- The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights --… ☆35,053 · Updated 2 weeks ago
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,025 · Updated last year