fkodom / soft-mixture-of-experts
PyTorch implementation of Soft MoE, from the Google Brain paper "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf)
☆71 · Updated last year
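For reference, here is a minimal sketch of the Soft MoE routing the paper describes: each slot receives a softmax-weighted (convex) combination of all input tokens, each expert processes its assigned slots, and each output token is a convex combination of the slot outputs. The class name, dimensions, and expert architecture below are illustrative assumptions, not this repository's actual API.

```python
import torch
import torch.nn as nn

class SoftMoE(nn.Module):
    """Minimal Soft MoE layer sketch; names and sizes are illustrative."""

    def __init__(self, dim, num_experts=4, slots_per_expert=1):
        super().__init__()
        self.num_experts = num_experts
        self.slots_per_expert = slots_per_expert
        # One learned embedding per slot, used to score every input token.
        self.slot_embeds = nn.Parameter(torch.randn(num_experts * slots_per_expert, dim))
        # Each expert is a small feed-forward network (an assumed architecture).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (batch, tokens, dim)
        # Score every (token, slot) pair.
        logits = torch.einsum('btd,sd->bts', x, self.slot_embeds)
        # Dispatch: softmax over tokens, so each slot is a convex mix of tokens.
        dispatch = logits.softmax(dim=1)
        # Combine: softmax over slots, so each token is a convex mix of slot outputs.
        combine = logits.softmax(dim=2)
        slots = torch.einsum('bts,btd->bsd', dispatch, x)  # (batch, slots, dim)
        slots = slots.view(x.size(0), self.num_experts, self.slots_per_expert, -1)
        # Run each expert on its own slots, then merge slots back together.
        out = torch.stack(
            [expert(slots[:, i]) for i, expert in enumerate(self.experts)], dim=1
        ).flatten(1, 2)  # (batch, slots, dim)
        return torch.einsum('bts,bsd->btd', combine, out)  # (batch, tokens, dim)
```

As a quick sanity check, `SoftMoE(dim=256)(torch.randn(2, 64, 256))` returns a `(2, 64, 256)` tensor, since the combine step maps the slot outputs back onto the original token sequence.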
Alternatives and similar repositories for soft-mixture-of-experts:
Users interested in soft-mixture-of-experts are comparing it to the repositories listed below.
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆50 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated last year
- Implementation of 🌻 Mirasol, SOTA multimodal autoregressive model out of Google DeepMind, in PyTorch ☆88 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆260 · Updated 9 months ago
- Implementation of Zorro, Masked Multimodal Transformer, in PyTorch ☆96 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆92 · Updated 6 months ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆62 · Updated 9 months ago
- Implementation of Agent Attention in PyTorch ☆89 · Updated 7 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆96 · Updated 4 months ago
- Implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1…) ☆127 · Updated last year
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆55 · Updated 9 months ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆87 · Updated 8 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆115 · Updated 4 months ago
- Official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns…" ☆16 · Updated last year
- Towards Understanding the Mixture-of-Experts Layer in Deep Learning ☆22 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- Implementation of a Light Recurrent Unit in PyTorch ☆48 · Updated 4 months ago
- Official PyTorch implementation of the paper "Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT…" ☆32 · Updated 11 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆52 · Updated 6 months ago
- ☆99 · Updated 11 months ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆30 · Updated last year
- Implementation of Infini-Transformer in PyTorch ☆109 · Updated last month
- Code accompanying the paper "Massive Activations in Large Language Models" ☆140 · Updated 11 months ago
- ☆53 · Updated last year
- A repository for DenseSSMs ☆86 · Updated 10 months ago
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆79 · Updated 11 months ago
- Implementation of Bitune: Bidirectional Instruction-Tuning ☆19 · Updated 8 months ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch ☆97 · Updated last year
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆51 · Updated 5 months ago