microsoft / Samba
[ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
☆909 · Updated 4 months ago
Alternatives and similar repositories for Samba
Users interested in Samba are comparing it to the libraries listed below
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆930 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,110 · Updated 3 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆559 · Updated 8 months ago
- Minimalistic large language model 3D-parallelism training ☆2,191 · Updated 2 weeks ago
- Annotated version of the Mamba paper ☆489 · Updated last year
- Code for BLT research paper ☆1,983 · Updated 3 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆863 · Updated 6 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆651 · Updated 3 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆537 · Updated 4 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆827 · Updated last week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆878 · Updated last week
- System 2 Reasoning Link Collection ☆852 · Updated 6 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (see the sketch after this list). Conceptually, spars… ☆343 · Updated 9 months ago
- An Open Source Toolkit For LLM Distillation ☆724 · Updated 2 months ago
- A repository for research on medium sized language models. ☆510 · Updated 3 months ago
- Reference implementation of Megalodon 7B model ☆522 · Updated 4 months ago
- [ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆572 · Updated 7 months ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆291 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,603 · Updated 10 months ago
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,512 · Updated 7 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,402 · Updated last year
- ☆565 · Updated last year
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,265 · Updated last month
- Muon is an optimizer for hidden layers in neural networks ☆1,710 · Updated 2 months ago
- Build high-performance AI models with modular building blocks ☆550 · Updated last week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆334 · Updated 4 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆351 · Updated 10 months ago
- Official repository for ORPO ☆464 · Updated last year
- ☆539 · Updated 9 months ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆813 · Updated last month
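
The memory-layers entry above describes a trainable key-value lookup that adds parameters to a model without a matching increase in FLOPs. The PyTorch sketch below illustrates that idea only; it is not the listed repository's implementation. The class name `KeyValueMemoryLayer`, the slot count, and the dense key scoring are all illustrative assumptions; real memory layers factorize the key space (e.g. product keys) so that scoring stays cheap even with millions of slots.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KeyValueMemoryLayer(nn.Module):
    """Minimal sketch of a sparsely activated key-value memory lookup.

    Hypothetical simplification: keys are scored densely here, whereas a
    production memory layer would factorize them (product keys) to keep
    the scoring cost sub-linear in the number of memory slots.
    """

    def __init__(self, d_model: int, num_slots: int = 4096, top_k: int = 4):
        super().__init__()
        # Trainable keys and a large value bank; the value bank holds most
        # of the extra parameters but only top_k rows are read per token.
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * d_model ** -0.5)
        self.values = nn.Embedding(num_slots, d_model)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model). Score each token against every key.
        scores = x @ self.keys.t()                       # (batch, seq, num_slots)
        weights, idx = scores.topk(self.top_k, dim=-1)   # activate only top_k slots
        weights = F.softmax(weights, dim=-1)
        # Gather the selected value vectors and mix them into the residual
        # stream; untouched slots receive no gradient and cost no compute.
        selected = self.values(idx)                      # (batch, seq, top_k, d_model)
        return x + (weights.unsqueeze(-1) * selected).sum(dim=-2)


# Usage: the layer is shape-preserving, so it can drop in alongside an FFN block.
layer = KeyValueMemoryLayer(d_model=256)
x = torch.randn(2, 16, 256)
print(layer(x).shape)  # torch.Size([2, 16, 256])
```

Because only `top_k` value rows are touched per token, growing `num_slots` adds capacity almost for free at inference time; the dense key scoring is the part a product-key scheme would replace.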