goombalab / phi-mamba
Official implementation of Phi-Mamba, a MOHAWK-distilled model from the paper "Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Models".
☆61 · Updated this week
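For context, MOHAWK distills a pretrained Transformer into a state-space model in three stages: matrix orientation (aligning the student's materialized token-mixing matrix with the teacher's attention matrix), hidden-state alignment (matching block outputs), and end-to-end knowledge distillation after transferring the remaining teacher weights. Below is a minimal sketch of the three stage losses; the function names and tensor arguments are hypothetical illustrations, not this repo's actual API.

```python
# Minimal sketch of the three MOHAWK stage losses. All helper names and
# tensor shapes are assumptions for illustration, not the repo's API.
import torch
import torch.nn.functional as F

def stage1_matrix_orientation(attn_teacher, mix_student):
    # Stage 1: per layer, align the student's materialized token-mixing
    # matrix (the SSM's attention-like matrix) with the teacher's
    # self-attention matrix, here via Frobenius distance.
    return torch.linalg.matrix_norm(attn_teacher - mix_student).mean()

def stage2_hidden_alignment(h_teacher, h_student):
    # Stage 2: match each student block's output hidden states to the
    # corresponding teacher block's outputs (L2 distance per token).
    return torch.linalg.vector_norm(h_teacher - h_student, dim=-1).mean()

def stage3_logit_distillation(logits_teacher, logits_student, T=1.0):
    # Stage 3: after transferring the remaining teacher weights
    # (embeddings, MLPs, norms), train end to end with a KL loss
    # between the teacher and student output distributions.
    log_p_teacher = F.log_softmax(logits_teacher / T, dim=-1)
    log_p_student = F.log_softmax(logits_student / T, dim=-1)
    return F.kl_div(log_p_student, log_p_teacher,
                    log_target=True, reduction="batchmean") * T**2
```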
Related projects:
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆87 · Updated 8 months ago
- PyTorch implementation of models from the Zamba2 series. ☆63 · Updated last month
- Some preliminary explorations of Mamba's context scaling. ☆184 · Updated 7 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆104 · Updated 6 months ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆94 · Updated last month
- Understand and test language model architectures on synthetic tasks. ☆156 · Updated 4 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆105 · Updated 3 weeks ago
- Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆130 · Updated this week
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆56 · Updated this week
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆206 · Updated last month
- Official implementation of the paper "DeciMamba: Exploring the Length Extrapolation Potential of Mamba" ☆18 · Updated last month
- Language models scale reliably with over-training and on downstream tasks ☆91 · Updated 5 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆94 · Updated 2 weeks ago
- HGRN2: Gated Linear RNNs with State Expansion ☆46 · Updated 3 weeks ago
- Implementation of the GateLoop Transformer in PyTorch and JAX ☆86 · Updated 3 months ago
- Implementation of Infini-Transformer in PyTorch ☆100 · Updated last month
- A testbed for various linear attention designs. ☆55 · Updated 4 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆34 · Updated 10 months ago
- A repository for research on medium-sized language models. ☆71 · Updated 3 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆84 · Updated 4 months ago
- Mixture of A Million Experts ☆29 · Updated last month
- Simplified Masked Diffusion Language Model ☆160 · Updated last week