microsoft / Samba
[ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
⭐888 · Updated 2 months ago
Alternatives and similar repositories for Samba
Users interested in Samba are comparing it to the libraries listed below.
- Mamba-Chat: A chat LLM based on the state-space model architecture · ⭐926 · Updated last year
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention…" · ⭐290 · Updated last year
- Code for the BLT research paper · ⭐1,725 · Updated last month
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" · ⭐555 · Updated 6 months ago
- Open weights language model from Google DeepMind, based on Griffin · ⭐644 · Updated last month
- Recipes to scale inference-time compute of open models · ⭐1,101 · Updated last month
- Reference implementation of the Megalodon 7B model · ⭐520 · Updated last month
- Pretraining code for a large-scale depth-recurrent language model · ⭐793 · Updated 3 weeks ago
- A repository for research on medium-sized language models · ⭐502 · Updated last month
- Official implementation of Half-Quadratic Quantization (HQQ) · ⭐842 · Updated last week
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection · ⭐1,574 · Updated 8 months ago
- Minimalistic large language model 3D-parallelism training · ⭐2,012 · Updated this week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… · ⭐341 · Updated 7 months ago
- Implementation of Ring Attention, from Liu et al. at Berkeley AI, in PyTorch · ⭐526 · Updated last month
- OLMoE: Open Mixture-of-Experts Language Models · ⭐798 · Updated 3 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) · ⭐427 · Updated last month
- An Open Source Toolkit For LLM Distillation · ⭐669 · Updated last month
- A family of compressed models obtained via pruning and knowledge distillation · ⭐343 · Updated 7 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 · ⭐314 · Updated 2 months ago
- The repository for the code of the UltraFastBERT paper · ⭐516 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware · ⭐734 · Updated 9 months ago
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction · ⭐388 · Updated last year
- ⭐523 · Updated 7 months ago
- Train Models Contrastively in PyTorch · ⭐727 · Updated 3 months ago
- Muon is an optimizer for hidden layers in neural networks · ⭐988 · Updated this week
- ⭐544 · Updated 10 months ago
- Code for Quiet-STaR · ⭐734 · Updated 10 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM · ⭐484 · Updated 10 months ago
- Official inference library for pre-processing of Mistral models · ⭐755 · Updated this week
- [ICLR 2025 Spotlight] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters · ⭐563 · Updated 5 months ago