[ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
☆958 · Nov 16, 2025 · Updated 5 months ago
Alternatives and similar repositories for Samba
Users that are interested in Samba are comparing it to the libraries listed below.
- Schedule-Free Optimization in PyTorch ☆2,274 · May 21, 2025 · Updated 11 months ago
- Implementation for MatMul-free LM ☆3,056 · Dec 2, 2025 · Updated 4 months ago
- PyTorch implementation of models from the Zamba2 series ☆193 · Jan 23, 2025 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆57 · Aug 20, 2024 · Updated last year
- 🚀 Efficient implementations for emerging model architectures ☆4,999 · Updated this week
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆252 · Jun 6, 2025 · Updated 10 months ago
- Mamba SSM architecture ☆18,069 · Apr 16, 2026 · Updated last week
- Annotated version of the Mamba paper ☆499 · Feb 27, 2024 · Updated 2 years ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆311 · Jun 11, 2024 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,663 · Apr 7, 2026 · Updated 3 weeks ago
- Efficient Triton Kernels for LLM Training ☆6,298 · Apr 18, 2026 · Updated last week
- Accelerated First Order Parallel Associative Scan ☆197 · Jan 7, 2026 · Updated 3 months ago
- A PyTorch native platform for training generative AI models ☆5,258 · Updated this week
- Tile primitives for speedy kernels ☆3,326 · Updated this week
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Apr 17, 2024 · Updated 2 years ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆205 · Jul 17, 2024 · Updated last year
- Helpful tools and examples for working with flex-attention ☆1,179 · Apr 13, 2026 · Updated 2 weeks ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculate the attention… ☆1,207 · Apr 8, 2026 · Updated 3 weeks ago
- [ICLR 2025] Official PyTorch implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆555 · Mar 13, 2026 · Updated last month
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton ☆600 · Aug 12, 2025 · Updated 8 months ago
- Here we will test various linear attention designs ☆62 · Apr 25, 2024 · Updated 2 years ago
- GPT-2 from scratch in MLX ☆424 · Jun 12, 2024 · Updated last year
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆940 · Mar 3, 2024 · Updated 2 years ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆171 · Jan 30, 2025 · Updated last year
- Understand and test language model architectures on synthetic tasks ☆265 · Mar 22, 2026 · Updated last month
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch ☆2,944 · Mar 8, 2024 · Updated 2 years ago
- Some preliminary explorations of Mamba's context scaling ☆219 · Feb 8, 2024 · Updated 2 years ago
- [NeurIPS 2024] Official repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆239 · Oct 14, 2025 · Updated 6 months ago
- Code for the BLT research paper ☆2,035 · Nov 3, 2025 · Updated 5 months ago
- Open-weights language model from Google DeepMind, based on Griffin ☆670 · Feb 6, 2026 · Updated 2 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR ☆2,094 · Jul 29, 2024 · Updated last year
- ☆19 · Dec 4, 2025 · Updated 4 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆664 · Jun 1, 2024 · Updated last year
- Reference implementation of the Megalodon 7B model ☆526 · May 17, 2025 · Updated 11 months ago
- Tools for merging pretrained large language models ☆7,023 · Mar 15, 2026 · Updated last month
- Fast and memory-efficient exact attention ☆23,563 · Updated this week
- ☆59 · Jul 9, 2024 · Updated last year
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale ☆13,326 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens ☆8,947 · May 3, 2024 · Updated last year