state-spaces / mamba
Mamba SSM architecture
☆17,153 · Updated 3 weeks ago
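For orientation, here is a minimal sketch of the selective state-space recurrence that the repositories below implement or build on: h_t = exp(Δ_t A) h_{t-1} + Δ_t B_t x_t, y_t = C_t h_t. It is written as a plain PyTorch loop; all names and shapes are illustrative assumptions, not the state-spaces/mamba API, and the real library replaces this loop with a fused, hardware-aware CUDA scan.

```python
# Minimal sketch of a selective SSM scan (simplified discretization).
# Shapes and argument names are illustrative, not the state-spaces/mamba API.
import torch

def selective_scan(x, dt, A, B, C):
    """x:  (batch, seq, d_inner)         input sequence
       dt: (batch, seq, d_inner)         positive, input-dependent step sizes
       A:  (d_inner, d_state)            negative-real, applied diagonally
       B, C: (batch, seq, d_state)       input-dependent projections."""
    b, L, d = x.shape
    n = A.shape[1]
    h = torch.zeros(b, d, n, device=x.device, dtype=x.dtype)  # hidden state
    ys = []
    for t in range(L):
        dA = torch.exp(dt[:, t].unsqueeze(-1) * A)            # (b, d, n)
        dB = dt[:, t].unsqueeze(-1) * B[:, t].unsqueeze(1)    # (b, d, n)
        h = dA * h + dB * x[:, t].unsqueeze(-1)               # state update
        ys.append((h * C[:, t].unsqueeze(1)).sum(-1))         # y_t = C_t h_t
    return torch.stack(ys, dim=1)                             # (b, L, d)

# Example call with toy shapes:
# x = torch.randn(2, 16, 4); dt = 0.1 * torch.rand(2, 16, 4)
# A = -torch.rand(4, 8); B = torch.randn(2, 16, 8); C = torch.randn(2, 16, 8)
# y = selective_scan(x, dt, A, B, C)  # (2, 16, 4)
```

The "selective" part is that dt, B, and C are computed from the input at each step, unlike the fixed parameters of earlier structured SSMs such as S4 (listed below).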
Alternatives and similar repositories for mamba
Users interested in mamba are comparing it to the libraries listed below.
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,917 · Updated last year
- [ICML 2024] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model ☆3,795 · Updated 11 months ago
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ☆1,425 · Updated 2 weeks ago
- Fast and memory-efficient exact attention ☆22,113 · Updated this week
- Official PyTorch Implementation of "Scalable Diffusion Models with Transformers" ☆8,336 · Updated last year
- Structured state space sequence models ☆2,838 · Updated last year
- VMamba: Visual State Space Models; code is based on Mamba ☆3,039 · Updated 11 months ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,486 · Updated this week
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,352 · Updated last week
- Official PyTorch implementation of "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆1,318 · Updated last year
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,800 · Updated this week
- Awesome Papers related to Mamba. ☆1,388 · Updated last year
- Modeling, training, eval, and inference code for OLMo ☆6,305 · Updated 2 months ago
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,183 · Updated 5 months ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,002 · Updated 2 weeks ago
- [CVPR 2025] Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone ☆2,016 · Updated 6 months ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,587 · Updated last week
- An annotated implementation of the Transformer paper. ☆6,993 · Updated last year
- PyTorch native post-training library ☆5,669 · Updated this week
- Foundation Architecture for (M)LLMs ☆3,130 · Updated last year
- Transformer: PyTorch Implementation of "Attention Is All You Need" ☆4,414 · Updated 6 months ago
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" (see the sketch after this list) ☆13,233 · Updated last year
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,891 · Updated last year
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" ☆1,212 · Updated 2 years ago
- Hackable and optimized Transformer building blocks, supporting a composable construction. ☆10,326 · Updated this week
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more ☆3,343 · Updated 8 months ago
- An open source implementation of CLIP. ☆13,353 · Updated 3 months ago
- A playbook for systematically maximizing the performance of deep learning models. ☆29,774 · Updated last year
- ☆12,275 · Updated last week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,182 · Updated last year
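As referenced in the loralib entry above, here is a minimal sketch of the low-rank adaptation idea that several of the fine-tuning libraries in this list build on: freeze a pretrained linear layer and learn only a rank-r update B A, scaled by alpha / r. Class and argument names here are illustrative assumptions, not loralib's or PEFT's actual API.

```python
# Minimal LoRA sketch: frozen base weight W plus a trainable low-rank update.
# LoRALinear, r, and alpha are illustrative names, not the loralib API.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                   # freeze pretrained W
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # BA = 0 at init
        self.scale = alpha / r

    def forward(self, x):
        # W x + (alpha / r) * B A x -- only A and B receive gradients
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Example: wrap a layer, then train only the adapter parameters.
# layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
# y = layer(torch.randn(4, 768))  # identical to the base layer at init
```

Because B starts at zero, the wrapped layer initially behaves exactly like the pretrained one, and only the small A and B matrices (2 * r * d parameters instead of d^2) are updated during fine-tuning.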