jzhang38 / LongMamba
Some preliminary explorations of Mamba's context scaling.
☆184 · Updated 7 months ago
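The snippet below is a minimal, hypothetical sketch of what a context-scaling probe for a Mamba model can look like: evaluate perplexity on the same document at increasing context lengths. It is not this repository's actual code; the checkpoint name, toy text, and context lengths are illustrative assumptions.

```python
# Sketch (assumed, not the repository's code): track perplexity as the
# evaluated context length grows, using a Hugging Face Mamba checkpoint.
import torch
from transformers import AutoTokenizer, MambaForCausalLM

name = "state-spaces/mamba-130m-hf"  # assumed checkpoint choice
tokenizer = AutoTokenizer.from_pretrained(name)
model = MambaForCausalLM.from_pretrained(name).eval()

# Stand-in for a long evaluation document.
text = "The state space model processes the sequence one token at a time. " * 1000
ids = tokenizer(text, return_tensors="pt").input_ids

for ctx in (1024, 2048, 4096, 8192):
    chunk = ids[:, :ctx]
    with torch.no_grad():
        # Passing labels makes the model return mean next-token cross-entropy.
        loss = model(input_ids=chunk, labels=chunk).loss
    print(f"context {ctx}: perplexity {torch.exp(loss).item():.2f}")
```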
Related projects:
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆206 · Updated last month
- Understand and test language model architectures on synthetic tasks. ☆156 · Updated 4 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆105 · Updated 3 weeks ago
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆61 · Updated this week
- Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆127 · Updated this week
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆233 · Updated 4 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆123 · Updated 2 months ago
- Token Omission Via Attention ☆118 · Updated 7 months ago
- Language models scale reliably with over-training and on downstream tasks ☆91 · Updated 5 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆94 · Updated 2 weeks ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆278 · Updated 3 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆166 · Updated 3 weeks ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆81 · Updated 4 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆104 · Updated 6 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆120 · Updated 4 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆87 · Updated 8 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆84 · Updated 4 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆63 · Updated 3 months ago
- Code for "Adam-mini: Use Fewer Learning Rates To Gain More" (https://arxiv.org/abs/2406.16793) ☆283 · Updated 2 weeks ago
- A curated reading list of research in Adaptive Computation, Dynamic Compute & Mixture of Experts (MoE). Inference time compute as seen in… ☆123 · Updated last month
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆290 · Updated 5 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆161 · Updated last week
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆95 · Updated 3 months ago