Mamba SSM architecture
☆17,257, updated Feb 18, 2026
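For orientation, the core of an SSM architecture like Mamba is a discretized linear state-space recurrence, h_t = Ā·h_{t-1} + B̄·x_t with output y_t = C·h_t. A minimal sketch of that recurrent form in plain NumPy (the shapes, names, and toy values below are illustrative only and do not come from any repository listed here; real Mamba additionally makes Ā and B̄ input-dependent and runs a parallel scan):

```python
import numpy as np

def ssm_scan(A_bar, B_bar, C, x):
    """Run a discretized linear SSM over a 1-D input sequence.

    Recurrence (sequential form):
        h_t = A_bar @ h_{t-1} + B_bar * x_t
        y_t = C @ h_t
    """
    N = A_bar.shape[0]        # state dimension
    h = np.zeros(N)           # hidden state starts at zero
    ys = []
    for x_t in x:             # one step per input token
        h = A_bar @ h + B_bar * x_t
        ys.append(C @ h)
    return np.array(ys)

# Toy example: state dim 2, impulse input of length 4
A_bar = np.array([[0.9, 0.0],
                  [0.0, 0.5]])   # diagonal state transition (decay rates)
B_bar = np.array([1.0, 1.0])     # input projection
C = np.array([0.5, 0.5])         # output projection
y = ssm_scan(A_bar, B_bar, C, np.array([1.0, 0.0, 0.0, 0.0]))
# y is the impulse response: a mixture of the two decaying modes
```

Because the recurrence is linear in h, the same computation can also be expressed as a convolution or a parallel scan, which is what the CUDA kernels in the repositories below optimize.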
Alternatives and similar repositories for mamba
Users interested in mamba are comparing it with the libraries listed below.
- [ICML 2024] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model (☆3,805, updated Feb 13, 2025)
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch (☆2,921, updated Mar 8, 2024)
- VMamba: Visual State Space Models; code is based on mamba (☆3,046, updated Mar 7, 2025)
- Fast and memory-efficient exact attention (☆22,361, updated this week)
- Structured state space sequence models (☆2,854, updated Jul 17, 2024)
- A simple and efficient Mamba implementation in pure PyTorch and MLX (☆1,433, updated Jan 26, 2026)
- Official PyTorch implementation of "Scalable Diffusion Models with Transformers" (☆8,382, updated May 31, 2024)
- Kolmogorov-Arnold Networks (☆16,174, updated Jan 19, 2025)
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond (☆24,478, updated Aug 12, 2024)
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… (☆14,375, updated Feb 21, 2026)
- Large-scale self-supervised pre-training across tasks, languages, and modalities (☆22,033, updated Jan 23, 2026)
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (☆20,678, updated this week)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆71,234, updated this week)
- Awesome papers related to Mamba (☆1,390, updated Oct 17, 2024)
- Hackable and optimized Transformers building blocks, supporting composable construction (☆10,353, updated Feb 20, 2026)
- Causal depthwise conv1d in CUDA, with a PyTorch interface (☆730, updated Feb 18, 2026)
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image (☆32,642, updated Feb 18, 2026)
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective (☆41,706, updated this week)
- Train transformer language models with reinforcement learning (☆17,460, updated this week)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,428, updated this week)
- [CVPR 2025] Official PyTorch implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone (☆2,034, updated Feb 9, 2026)
- Development repository for the Triton language and compiler (☆18,501, updated this week)
- The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoi… (☆53,497, updated Sep 18, 2024)
- Ongoing research training transformer models at scale (☆15,461, updated this week)
- Mamba-Chat: a chat LLM based on the state-space model architecture 🐍 (☆942, updated Mar 3, 2024)
- 🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch (☆32,873, updated this week)
- Accessible large language models via k-bit quantization for PyTorch (☆7,997, updated this week)
- The largest collection of PyTorch image encoders / backbones, including train, eval, inference, export scripts, and pretrained weights… (☆36,397, updated Feb 23, 2026)
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" (☆13,285, updated Dec 17, 2024)
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks (☆7,187, updated Jul 11, 2024)
- [NeurIPS 2024 Best Paper Award] [GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Mod… (☆8,626, updated Nov 10, 2025)
- SGLang is a high-performance serving framework for large language models and multimodal models (☆23,905, updated this week)
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model…