fkodom / soft-mixture-of-experts
PyTorch implementation of Soft MoE from the Google DeepMind paper "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf)
☆71 · Updated last year
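For orientation, here is a minimal sketch of the soft dispatch/combine routing the paper describes. It is an assumption-laden illustration, not fkodom's actual API: the class and parameter names (`SoftMoE`, `slot_embeds`, `slots_per_expert`) and the MLP expert shape are all hypothetical.

```python
# Minimal Soft MoE sketch (illustrative, not fkodom's API).
# Dispatch weights: softmax over tokens; combine weights: softmax over slots.
import torch
import torch.nn as nn

class SoftMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int, slots_per_expert: int):
        super().__init__()
        self.num_experts = num_experts
        self.slots_per_expert = slots_per_expert
        # One learnable d-dimensional embedding per slot.
        self.slot_embeds = nn.Parameter(torch.randn(num_experts * slots_per_expert, dim))
        # Small MLP experts stand in for the paper's transformer FFN experts.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        logits = torch.einsum("btd,sd->bts", x, self.slot_embeds)   # (b, n, e*s)
        dispatch = logits.softmax(dim=1)  # normalize over tokens -> slot inputs
        combine = logits.softmax(dim=2)   # normalize over slots  -> token outputs
        slots = torch.einsum("bts,btd->bsd", dispatch, x)           # (b, e*s, d)
        slots = slots.view(x.size(0), self.num_experts, self.slots_per_expert, -1)
        # Each expert processes only its own slots.
        outs = torch.stack(
            [expert(slots[:, i]) for i, expert in enumerate(self.experts)], dim=1
        )
        outs = outs.view(x.size(0), -1, x.size(-1))                 # (b, e*s, d)
        return torch.einsum("bts,bsd->btd", combine, outs)          # (b, n, d)
```

Because both softmaxes are dense and differentiable, Soft MoE avoids the discrete top-k routing and auxiliary load-balancing losses of sparse MoE layers.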
Alternatives and similar repositories for soft-mixture-of-experts:
Users interested in soft-mixture-of-experts are comparing it to the libraries listed below.
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆53 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 6 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆49 · Updated 2 years ago
- Implementation of 🌻 Mirasol, SOTA multimodal autoregressive model out of Google DeepMind, in PyTorch ☆88 · Updated last year
- ☆27 · Updated 2 months ago
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆54 · Updated 4 months ago
- Implementation of Infini-Transformer in PyTorch ☆110 · Updated 3 months ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆64 · Updated 11 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆48 · Updated 2 months ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆31 · Updated last year
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆118 · Updated 5 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆97 · Updated 7 months ago
- Implementation of Zorro, Masked Multimodal Transformer, in PyTorch ☆97 · Updated last year
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆53 · Updated 7 months ago
- ☆21 · Updated 2 years ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆282 · Updated last week
- Towards Understanding the Mixture-of-Experts Layer in Deep Learning ☆26 · Updated last year
- PyTorch implementation of LIMoE ☆53 · Updated last year
- Latest Weight Averaging (NeurIPS HITY 2022) ☆30 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated 7 months ago
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆31 · Updated last month
- Implementation of Agent Attention in PyTorch ☆90 · Updated 9 months ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆39 · Updated 5 months ago
- Patching open-vocabulary models by interpolating weights ☆91 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆123 · Updated 7 months ago
- Official repo for the paper "Weight-based Decomposition: A Case for Bilinear MLPs" ☆20 · Updated 4 months ago
- ☆37 · Updated last year
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆51 · Updated 9 months ago