microsoft/Samba
Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling"
☆855 · Updated last month
Alternatives and similar repositories for Samba:
Users interested in Samba are comparing it to the libraries listed below.
- Minimalistic large language model 3D-parallelism training — ☆1,701 · Updated this week
- Mamba-Chat: A chat LLM based on the state-space model architecture — ☆922 · Updated last year
- Recipes to scale inference-time compute of open models — ☆1,041 · Updated 3 weeks ago
- Implementation of Ring Attention, from Liu et al. at Berkeley AI, in PyTorch — ☆506 · Updated 4 months ago
- A repository for research on medium-sized language models — ☆493 · Updated 2 months ago
- Code for the BLT research paper — ☆1,436 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends — ☆1,313 · Updated this week
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" — ☆548 · Updated 2 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) — ☆765 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for education purposes — ☆935 · Updated 2 weeks ago
- Large Context Attention — ☆690 · Updated last month
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware — ☆705 · Updated 5 months ago
- [ICLR 2025 Spotlight] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters — ☆535 · Updated last month
- (no description) — ☆501 · Updated 4 months ago
- Code for Quiet-STaR — ☆721 · Updated 7 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models — ☆1,481 · Updated last year
- Stanford NLP Python library for Representation Finetuning (ReFT) — ☆1,444 · Updated last month
- Official repository for ORPO — ☆444 · Updated 9 months ago
- Muon optimizer: >30% sample efficiency with <3% wallclock overhead — ☆505 · Updated last week
- A bibliography and survey of the papers surrounding o1 — ☆1,180 · Updated 4 months ago
- System 2 Reasoning Link Collection — ☆811 · Updated this week
- Deep learning for dummies: all the practical details and useful utilities that go into working with real models — ☆783 · Updated 2 weeks ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning — ☆646 · Updated 9 months ago
- Helpful tools and examples for working with flex-attention — ☆689 · Updated last week
- Training Large Language Model to Reason in a Continuous Latent Space — ☆985 · Updated last month
- An Open Source Toolkit for LLM Distillation — ☆540 · Updated 2 months ago
- A family of compressed models obtained via pruning and knowledge distillation — ☆330 · Updated 4 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from Meta AI — ☆1,372 · Updated 11 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) — ☆398 · Updated 3 months ago