state-spaces / mamba
Mamba SSM architecture
☆17,186 · Jan 12, 2026 · Updated last month
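At its core, the Mamba SSM architecture runs a discretized linear state-space recurrence, h_t = Ā h_{t-1} + B̄ x_t with output y_t = C·h_t. The sketch below is an illustrative, assumption-laden reduction (diagonal Ā, a plain Python loop, NumPy instead of CUDA); the actual repository uses input-dependent "selective" parameters and a hardware-aware parallel scan.

```python
import numpy as np

def ssm_scan(A_bar, B_bar, C, xs):
    """Sequential scan of a discretized linear SSM (illustrative only).

    h_t = A_bar * h_{t-1} + B_bar * x_t   (elementwise: diagonal A_bar)
    y_t = C . h_t
    """
    h = np.zeros_like(A_bar)
    ys = []
    for x_t in xs:
        h = A_bar * h + B_bar * x_t  # state update
        ys.append(float(C @ h))      # readout
    return ys

# A single unit impulse decays geometrically through the state:
print(ssm_scan(np.array([0.5]), np.array([1.0]), np.array([1.0]),
               [1.0, 0.0, 0.0]))  # → [1.0, 0.5, 0.25]
```

In real Mamba, B̄, C, and the discretization step Δ are functions of the input at each timestep, which is what makes the model selective rather than a fixed linear filter.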
Alternatives and similar repositories for mamba
Users interested in mamba are comparing it to the libraries listed below.
- [ICML 2024] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model ☆3,795 · Feb 13, 2025 · Updated last year
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,918 · Mar 8, 2024 · Updated last year
- VMamba: Visual State Space Models; code is based on mamba ☆3,041 · Mar 7, 2025 · Updated 11 months ago
- Fast and memory-efficient exact attention ☆22,231 · Updated this week
- Structured state space sequence models ☆2,842 · Jul 17, 2024 · Updated last year
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ☆1,429 · Jan 26, 2026 · Updated 3 weeks ago
- Official PyTorch Implementation of "Scalable Diffusion Models with Transformers" ☆8,352 · May 31, 2024 · Updated last year
- Kolmogorov Arnold Networks ☆16,164 · Jan 19, 2025 · Updated last year
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,446 · Aug 12, 2024 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,351 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,021 · Jan 23, 2026 · Updated 3 weeks ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,619 · Feb 9, 2026 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆70,205 · Updated this week
- Awesome Papers related to Mamba. ☆1,389 · Oct 17, 2024 · Updated last year
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,336 · Feb 5, 2026 · Updated last week
- Causal depthwise conv1d in CUDA, with a PyTorch interface ☆717 · Jan 12, 2026 · Updated last month
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image ☆32,562 · Jul 23, 2024 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,627 · Updated this week
- Train transformer language models with reinforcement learning. ☆17,360 · Updated this week
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,379 · Updated this week
- [CVPR 2025] Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone ☆2,023 · Feb 9, 2026 · Updated last week
- Development repository for the Triton language and compiler ☆18,429 · Updated this week
- The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoi… ☆53,411 · Sep 18, 2024 · Updated last year
- Ongoing research training transformer models at scale ☆15,213 · Updated this week
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆940 · Mar 3, 2024 · Updated last year
- 🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch. ☆32,768 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆7,952 · Updated this week
- The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights --… ☆36,351 · Updated this week
- Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Py… ☆24,993 · Updated this week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆13,248 · Dec 17, 2024 · Updated last year
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,188 · Jul 11, 2024 · Updated last year
- [NeurIPS 2024 Best Paper Award] [GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Mod… ☆8,614 · Nov 10, 2025 · Updated 3 months ago
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆23,547 · Updated this week
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model… ☆156,440 · Updated this week
- PyTorch code and models for the DINOv2 self-supervised learning method. ☆12,393 · Dec 22, 2025 · Updated last month
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,837 · Jun 10, 2024 · Updated last year
- This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows". ☆15,709 · Jul 24, 2024 · Updated last year
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,889 · May 3, 2024 · Updated last year
- A PyTorch native platform for training generative AI models ☆5,069 · Updated this week