lucidrains / soft-moe-pytorch
Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch
☆233 · Updated 4 months ago
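For orientation, below is a minimal sketch of the soft routing at the heart of Soft MoE ("From Sparse to Soft Mixtures of Experts", https://arxiv.org/abs/2308.00951): every slot takes a convex combination of all tokens, so routing is fully differentiable and no tokens are dropped. This is an illustrative reimplementation under the paper's formulation, not this repo's actual API; the names `SoftMoE`, `num_experts`, and `slots_per_expert` are assumptions.

```python
import torch
import torch.nn as nn

class SoftMoE(nn.Module):
    # Illustrative sketch of Soft MoE routing; class/parameter names are
    # hypothetical, not lucidrains/soft-moe-pytorch's public interface.
    def __init__(self, dim, num_experts=4, slots_per_expert=2, hidden_mult=4):
        super().__init__()
        self.num_experts = num_experts
        self.slots_per_expert = slots_per_expert
        num_slots = num_experts * slots_per_expert
        # one learned routing vector per slot
        self.slot_embeds = nn.Parameter(torch.randn(num_slots, dim))
        # each expert is a small feedforward network
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim * hidden_mult), nn.GELU(),
                          nn.Linear(dim * hidden_mult, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                     # x: (batch, tokens, dim)
        logits = torch.einsum('btd,sd->bts', x, self.slot_embeds)
        # dispatch weights: softmax over tokens, one distribution per slot
        dispatch = logits.softmax(dim=1)
        # combine weights: softmax over slots, one distribution per token
        combine = logits.softmax(dim=2)
        # each slot becomes a weighted average of all tokens
        slots = torch.einsum('bts,btd->bsd', dispatch, x)
        # route each expert's group of slots through that expert
        slots = slots.reshape(x.size(0), self.num_experts, self.slots_per_expert, -1)
        out = torch.stack([expert(slots[:, i]) for i, expert in enumerate(self.experts)], dim=1)
        out = out.reshape(x.size(0), -1, x.size(-1))   # (batch, slots, dim)
        # each token mixes the slot outputs back with its combine weights
        return torch.einsum('bts,bsd->btd', combine, out)

x = torch.randn(2, 16, 64)
moe = SoftMoE(dim=64)
print(moe(x).shape)   # torch.Size([2, 16, 64])
```

Because both weight matrices come from softmaxes over dense token-slot logits, the whole layer is differentiable end to end, sidestepping the discrete top-k routing that sparse MoE needs auxiliary losses to stabilize.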
Related projects:
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆278 · Updated 3 months ago
- Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google DeepMind, in Pytorch ☆87 · Updated 8 months ago
- Some preliminary explorations of Mamba's context scaling. ☆184 · Updated 7 months ago
- Implementation of Infini-Transformer in Pytorch ☆100 · Updated last month
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆186 · Updated last year
- Pytorch implementation of the PEER block from the paper "Mixture of A Million Experts", by Xu Owen He at DeepMind ☆105 · Updated 3 weeks ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆186 · Updated 3 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆104 · Updated 6 months ago
- When do we not need larger vision models? ☆314 · Updated last month
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new… ☆115 · Updated last month
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆206 · Updated last month
- Implementation of Recurrent Memory Transformer, a NeurIPS 2022 paper, in Pytorch ☆391 · Updated 7 months ago
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆202 · Updated 3 weeks ago
- Recurrent Memory Transformer ☆148 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆161 · Updated last week
- Language Quantized AutoEncoders ☆94 · Updated last year
- VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning ☆77 · Updated last week
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT, "Rotary Position Embedding for Vision Transformer" ☆157 · Updated last month
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆94 · Updated last month
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆62 · Updated 11 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆101 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆87 · Updated 8 months ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆223 · Updated 7 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆222 · Updated last week
- Implementation of TiTok, proposed by ByteDance in "An Image is Worth 32 Tokens for Reconstruction and Generation" ☆159 · Updated 2 months ago
- Implementation of Block Recurrent Transformer - Pytorch ☆211 · Updated 3 weeks ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆101 · Updated 9 months ago