microsoft / Samba
[ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
☆915 · Updated 5 months ago
Alternatives and similar repositories for Samba
Users interested in Samba are comparing it to the repositories listed below.
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆933 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,111 · Updated 5 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆884 · Updated this week
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" ☆292 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆560 · Updated 10 months ago
- Reference implementation of the Megalodon 7B model ☆522 · Updated 5 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆652 · Updated 4 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆542 · Updated 5 months ago
- Minimalistic large language model 3D-parallelism training ☆2,267 · Updated last month
- OLMoE: Open Mixture-of-Experts Language Models ☆888 · Updated last month
- Pretraining and inference code for a large-scale depth-recurrent language model ☆836 · Updated last week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆344 · Updated 5 months ago
- Annotated version of the Mamba paper ☆489 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation ☆354 · Updated 11 months ago
- Code for the BLT research paper ☆1,999 · Updated 5 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (see the sketch after this list) ☆342 · Updated 10 months ago
- A repository for research on medium-sized language models. ☆515 · Updated 4 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,612 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆747 · Updated last year
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆388 · Updated last year
- [ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆574 · Updated 8 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,399 · Updated last year
- An Open Source Toolkit For LLM Distillation ☆744 · Updated 3 months ago
- The repository for the code of the UltraFastBERT paper ☆518 · Updated last year
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,207 · Updated last year
- A family of open-source Mixture-of-Experts (MoE) Large Language Models ☆1,614 · Updated last year
- nanoGPT-style version of Llama 3.1 ☆1,438 · Updated last year
- A Self-adaptation Framework🐙 that adapts LLMs for unseen tasks in real-time! ☆1,155 · Updated 8 months ago
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,518 · Updated 8 months ago
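
The "memory layers" entry above describes a concrete mechanism: a large trainable key-value table that each token queries sparsely, so parameter count grows while per-token compute stays roughly flat. Below is a minimal PyTorch sketch of that idea; the class name, table sizes, and the dense scoring step are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeyValueMemory(nn.Module):
    """Hypothetical memory layer: a large learned key/value table queried
    sparsely. Capacity grows with num_slots, but each token only reads its
    top_k entries, so the FLOP-heavy path stays small."""

    def __init__(self, d_model: int, num_slots: int = 4096, top_k: int = 4):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model)
        # The extra parameters live in these tables, not in the compute path.
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.values = nn.Embedding(num_slots, d_model)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.query_proj(x)                        # (batch, seq, d_model)
        scores = q @ self.keys.t()                    # (batch, seq, num_slots)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)       # gate over selected slots
        picked = self.values(top_idx)                 # (batch, seq, top_k, d_model)
        # Residual sum of the k selected value vectors, weighted by the gate.
        return x + (weights.unsqueeze(-1) * picked).sum(dim=-2)

# Quick shape check on random input.
layer = KeyValueMemory(d_model=64)
out = layer(torch.randn(2, 8, 64))
assert out.shape == (2, 8, 64)
```

Note that published memory-layer designs factorize the keys (product keys) so the top-k search never scores every slot; the dense `q @ self.keys.t()` above trades that efficiency for readability.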