microsoft / Samba
[ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
☆915 · Updated 5 months ago
Alternatives and similar repositories for Samba
Users interested in Samba are comparing it to the libraries listed below.
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆931 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆560 · Updated 9 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆830 · Updated last month
- Official implementation of Half-Quadratic Quantization (HQQ) ☆879 · Updated last month
- Reference implementation of Megalodon 7B model ☆522 · Updated 4 months ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention…" ☆291 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,108 · Updated 4 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆651 · Updated 4 months ago
- Code for BLT research paper ☆1,987 · Updated 4 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,610 · Updated 11 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆540 · Updated 4 months ago
- Minimalistic large language model 3D-parallelism training ☆2,246 · Updated last month
- OLMoE: Open Mixture-of-Experts Language Models ☆875 · Updated 2 weeks ago
- System 2 Reasoning Link Collection ☆853 · Updated 6 months ago
- An Open Source Toolkit For LLM Distillation ☆732 · Updated 2 months ago
- [ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆575 · Updated 7 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆342 · Updated 9 months ago
- SONAR, a new multilingual and multimodal fixed-size sentence embedding space, with a full suite of speech and text encoders and decoders. ☆825 · Updated 2 months ago
- Annotated version of the Mamba paper ☆490 · Updated last year
- A Self-adaptation Framework🐙 that adapts LLMs for unseen tasks in real-time! ☆1,151 · Updated 8 months ago
- ☆531 · Updated 2 weeks ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆437 · Updated 4 months ago
- The repository for the code of the UltraFastBERT paper ☆518 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation ☆352 · Updated 10 months ago
- ☆1,034 · Updated 9 months ago
- nanoGPT style version of Llama 3.1 ☆1,427 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,605 · Updated last year
- Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆563 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆338 · Updated 5 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆747 · Updated last year