[ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
☆952, updated Nov 16, 2025
Alternatives and similar repositories for Samba
Users interested in Samba are comparing it to the libraries listed below.
- Implementation for MatMul-free LM. (☆3,057, updated Dec 2, 2025)
- Schedule-Free Optimization in PyTorch (☆2,256, updated May 21, 2025)
- HGRN2: Gated Linear RNNs with State Expansion (☆56, updated Aug 20, 2024)
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" (☆248, updated Jun 6, 2025)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,428, updated this week)
- PyTorch implementation of models from the Zamba2 series. (☆187, updated Jan 23, 2025)
- Mamba SSM architecture (☆17,257, updated Feb 18, 2026)
- Accelerated First Order Parallel Associative Scan (☆194, updated Jan 7, 2026)
- Annotated version of the Mamba paper (☆497, updated Feb 27, 2024)
- Minimalistic large language model 3D-parallelism training (☆2,569, updated Feb 19, 2026)
- Efficient Triton Kernels for LLM Training (☆6,162, updated this week)
- A PyTorch native platform for training generative AI models (☆5,098, updated this week)
- Tile primitives for speedy kernels (☆3,183, updated this week)
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" (☆27, updated Apr 17, 2024)
- [ICLR 2025] Official PyTorch implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule (☆469, updated Feb 17, 2026)
- Implementation of Diffusion Transformer (DiT) in JAX (☆305, updated Jun 11, 2024)
- Reference implementation of the Megalodon 7B model (☆528, updated May 17, 2025)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate, dynamic sparse attention calculation… (☆1,188, updated Sep 30, 2025)
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" (☆170, updated Jan 30, 2025)
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. (☆203, updated Jul 17, 2024)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. (☆595, updated Aug 12, 2025)
- Helpful tools and examples for working with flex-attention (☆1,136, updated Feb 8, 2026)
- Code for the BLT research paper (☆2,029, updated Nov 3, 2025)
- Understand and test language model architectures on synthetic tasks. (☆254, updated this week)
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 (☆942, updated Mar 3, 2024)
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. (☆2,921, updated Mar 8, 2024)
- Tools for merging pretrained large language models. (☆6,814, updated Jan 26, 2026)
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. (☆2,083, updated Jul 29, 2024)
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" (☆110, updated Oct 11, 2025)
- Stanford NLP Python library for Representation Finetuning (ReFT) (☆1,558, updated Jan 14, 2026)
- Open weights language model from Google DeepMind, based on Griffin. (☆663, updated Feb 6, 2026)
- Here we will test various linear attention designs. (☆62, updated Apr 25, 2024)
- Engineering the state of RNN language models (Mamba, RWKV, etc.) (☆32, updated May 25, 2024)
- Fast and memory-efficient exact attention (☆22,361, updated this week)
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. (☆8,891, updated May 3, 2024)
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning (☆665, updated Jun 1, 2024)
- A State-Space Model with Rational Transfer Function Representation. (☆83, updated May 17, 2024)
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. (☆13,182, updated this week)