Mamba-Chat: A chat LLM based on the state-space model architecture
★940 · Mar 3, 2024 · Updated 2 years ago
Alternatives and similar repositories for mamba-chat
Users interested in mamba-chat are comparing it to the libraries listed below.
- Mamba SSM architecture ★17,979 · Updated this week
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ★2,944 · Mar 8, 2024 · Updated 2 years ago
- Implementation of the Mamba SSM with hf_integration. ★55 · Aug 31, 2024 · Updated last year
- Inference of Mamba, Mamba2 and Mamba3 models in pure C ★199 · Mar 18, 2026 · Updated 3 weeks ago
- Code from our practical deep dive using Mamba for information extraction ★57 · Dec 22, 2023 · Updated 2 years ago
- Some preliminary explorations of Mamba's context scaling. ★219 · Feb 8, 2024 · Updated 2 years ago
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ★1,454 · Jan 26, 2026 · Updated 2 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ★956 · Nov 16, 2025 · Updated 5 months ago
- ★31 · Dec 29, 2023 · Updated 2 years ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ★8,933 · May 3, 2024 · Updated last year
- Code repository for Black Mamba ★262 · Feb 8, 2024 · Updated 2 years ago
- Annotated version of the Mamba paper ★500 · Feb 27, 2024 · Updated 2 years ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from Meta AI ★1,407 · Apr 11, 2024 · Updated 2 years ago
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ★14,471 · Mar 30, 2026 · Updated 2 weeks ago
- A novel implementation of fusing ViT with Mamba into a fast, agile, and high-performance multi-modal model. Powered by Zeta, the simplest… ★464 · Mar 27, 2026 · Updated 2 weeks ago
- Tools for merging pretrained large language models. ★6,973 · Mar 15, 2026 · Updated last month
- Repository for StripedHyena, a state-of-the-art beyond-Transformer architecture ★422 · Mar 7, 2024 · Updated 2 years ago
- Awesome list of papers that extend Mamba to various applications. ★139 · Apr 8, 2026 · Updated last week
- YaRN: Efficient Context Window Extension of Large Language Models ★1,695 · Apr 17, 2024 · Updated last year
- Go ahead and axolotl questions ★11,688 · Updated this week
- Implementation of MambaByte in "MambaByte: Token-free Selective State Space Model" in Pytorch and Zeta ★125 · Updated this week
- ★63 · Dec 8, 2023 · Updated 2 years ago
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ★7,211 · Jul 11, 2024 · Updated last year
- ★210 · Jan 14, 2026 · Updated 3 months ago
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ★219 · Updated this week
- Structured state space sequence models ★2,875 · Jul 17, 2024 · Updated last year
- Robust recipes to align language models with human and AI preferences ★5,558 · Apr 8, 2026 · Updated last week
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ★2,175 · Oct 8, 2024 · Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ★61 · Apr 8, 2024 · Updated 2 years ago
- PyTorch native post-training library ★5,728 · Updated this week
- Training LLMs with QLoRA + FSDP ★1,538 · Nov 9, 2024 · Updated last year
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ★211 · Updated this week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ★13,297 · Updated this week
- The official implementation of Self-Play Fine-Tuning (SPIN) ★1,237 · May 8, 2024 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ★119 · Jan 7, 2024 · Updated 2 years ago
- Using multiple LLMs for ensemble forecasting ★16 · Jan 17, 2024 · Updated 2 years ago
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling ★1,837 · Jul 10, 2024 · Updated last year
- Reading list for research topics in state-space models ★357 · Jun 11, 2025 · Updated 10 months ago
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ★9,471 · Jun 7, 2025 · Updated 10 months ago