redotvideo / mamba-chat
Mamba-Chat: A chat LLM based on the state-space model architecture 🐍
★940 · Mar 3, 2024 · Updated last year
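For orientation, here is a minimal sketch of running a Mamba language model through the Hugging Face transformers Mamba port (available in transformers >= 4.39). The checkpoint name is an assumption for illustration, pointing at the reference state-spaces model rather than the mamba-chat weights; loading mamba-chat itself follows the repo's own README.

```python
# Illustrative only: generate text with a Mamba LM via the Hugging Face
# transformers Mamba port (requires transformers >= 4.39).
# "state-spaces/mamba-130m-hf" is a reference checkpoint used as a stand-in,
# not the mamba-chat weights themselves.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")

inputs = tokenizer("State-space models are", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```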
Alternatives and similar repositories for mamba-chat
Users interested in mamba-chat are comparing it to the libraries listed below.
- Mamba SSM architecture · ★17,153 · Jan 12, 2026 · Updated last month
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch (see the recurrence sketch after this list) · ★2,918 · Mar 8, 2024 · Updated last year
- Implementation of the Mamba SSM with hf_integration. · ★55 · Aug 31, 2024 · Updated last year
- Code from our practical deep dive using Mamba for information extraction · ★57 · Dec 22, 2023 · Updated 2 years ago
- A simple and efficient Mamba implementation in pure PyTorch and MLX. · ★1,429 · Jan 26, 2026 · Updated 2 weeks ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling · ★944 · Nov 16, 2025 · Updated 2 months ago
- Inference of Mamba and Mamba2 models in pure C · ★196 · Jan 22, 2026 · Updated 3 weeks ago
- Some preliminary explorations of Mamba's context scaling. · ★218 · Feb 8, 2024 · Updated 2 years ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. · ★8,891 · May 3, 2024 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI · ★1,407 · Apr 11, 2024 · Updated last year
- Code repository for Black Mamba · ★261 · Feb 8, 2024 · Updated 2 years ago
- ★31 · Dec 29, 2023 · Updated 2 years ago
- Annotated version of the Mamba paper · ★496 · Feb 27, 2024 · Updated last year
- Repository for StripedHyena, a state-of-the-art beyond Transformer architecture · ★409 · Mar 7, 2024 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… · ★14,351 · Updated this week
- A novel implementation of fusing ViT with Mamba into a fast, agile, and high performance Multi-Modal Model. Powered by Zeta, the simplest… · ★461 · Jan 19, 2026 · Updated 3 weeks ago
- Tools for merging pretrained large language models. · ★6,783 · Jan 26, 2026 · Updated 2 weeks ago
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks · ★7,188 · Jul 11, 2024 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models · ★1,668 · Apr 17, 2024 · Updated last year
- Go ahead and axolotl questions · ★11,289 · Updated this week
- Awesome list of papers that extend Mamba to various applications. · ★138 · Jun 10, 2025 · Updated 8 months ago
- The official implementation of Self-Play Fine-Tuning (SPIN) · ★1,234 · May 8, 2024 · Updated last year
- Robust recipes to align language models with human and AI preferences · ★5,495 · Sep 8, 2025 · Updated 5 months ago
- Implementation of MambaByte, from "MambaByte: Token-free Selective State Space Model", in PyTorch and Zeta · ★125 · Feb 6, 2026 · Updated last week
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… · ★2,174 · Oct 8, 2024 · Updated last year
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling · ★1,822 · Jul 10, 2024 · Updated last year
- Training LLMs with QLoRA + FSDP · ★1,539 · Nov 9, 2024 · Updated last year
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling · ★213 · Jan 30, 2026 · Updated 2 weeks ago
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. · ★13,155 · Updated this week
- ★208 · Jan 14, 2026 · Updated 3 weeks ago
- ★62 · Dec 8, 2023 · Updated 2 years ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads · ★2,705 · Jun 25, 2024 · Updated last year
- PyTorch native post-training library · ★5,669 · Updated this week
- Official inference library for Mistral models · ★10,664 · Nov 21, 2025 · Updated 2 months ago
- Structured state space sequence models · ★2,838 · Jul 17, 2024 · Updated last year
- ★35 · Nov 22, 2024 · Updated last year
- LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath · ★9,477 · Jun 7, 2025 · Updated 8 months ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs · ★4,440 · Dec 9, 2025 · Updated 2 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models · ★1,657 · Mar 8, 2024 · Updated last year
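Several entries above (the Mamba SSM repo, the one-file PyTorch port, the pure-C inference engine) implement variants of the same discretized state-space recurrence. The sketch below, flagged in the one-file PyTorch entry, shows that recurrence as a naive sequential scan; all shapes and names are illustrative assumptions, and real Mamba layers make Δ, B, and C input-dependent ("selective") and replace the Python loop with a hardware-efficient parallel scan.

```python
# Illustrative sketch of the discretized state-space recurrence behind
# Mamba-style models: h_t = A_bar * h_{t-1} + B_bar * x_t, y_t = C · h_t.
# Sequential form with fixed (non-selective) parameters; not the repos' code.
import torch

def ssm_scan(x, A, B, C, dt):
    """x: (L, D) input sequence; A, B, C: (D, N) per-channel state params
    (A is the diagonal of the state matrix); dt: (D,) step sizes."""
    L, D = x.shape
    # Zero-order-hold discretization of A; B_bar ≈ dt * B (simplified)
    A_bar = torch.exp(dt[:, None] * A)                  # (D, N)
    h = torch.zeros(D, A.shape[1])                      # hidden state
    ys = []
    for t in range(L):
        h = A_bar * h + (dt[:, None] * B) * x[t][:, None]
        ys.append((h * C).sum(-1))                      # project state to output
    return torch.stack(ys)                              # (L, D)

# Toy usage: 16 steps, 4 channels, state size 8; negative A for stability.
y = ssm_scan(torch.randn(16, 4), -torch.rand(4, 8),
             torch.randn(4, 8), torch.randn(4, 8), torch.full((4,), 0.1))
print(y.shape)  # torch.Size([16, 4])
```

Because the state matrix A is diagonal, each step is an elementwise O(D·N) update rather than a dense matrix multiply, which is what keeps the recurrence cheap enough to scale.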