davidmrau / mixture-of-experts
PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538
☆974 · Updated 6 months ago
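To orient readers, here is a minimal, self-contained PyTorch sketch of the paper's core idea: a gating network routes each input row to only the top-k of several expert networks. This is an illustrative sketch under assumed names and shapes, not this repository's actual API; the paper's noisy gating and load-balancing auxiliary loss are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Illustrative sparsely-gated MoE layer (hypothetical names): each input
    row is processed by only the top-k experts chosen by a learned gate."""

    def __init__(self, dim, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts)  # gating network scoring experts
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (batch, dim)
        logits = self.gate(x)                              # (batch, num_experts)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)  # keep only k experts per row
        weights = F.softmax(topk_vals, dim=-1)             # renormalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            idx = topk_idx[:, slot]                        # chosen expert id per row
            w = weights[:, slot].unsqueeze(-1)             # that expert's gate weight
            for e in idx.unique():                         # run each used expert once
                mask = idx == e                            # rows routed to expert e
                out[mask] += w[mask] * self.experts[int(e)](x[mask])
        return out

x = torch.randn(32, 64)
y = TopKMoE(dim=64)(x)  # (32, 64): sparse mixture of 2-of-8 expert outputs
```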
Related projects
Alternatives and complementary repositories for mixture-of-experts
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆637 · Updated last year
- A fast MoE implementation for PyTorch ☆1,560 · Updated 4 months ago
- A collection of AWESOME things about mixture-of-experts ☆962 · Updated 3 months ago
- A curated reading list of research in Mixture-of-Experts (MoE) ☆533 · Updated last week
- Transformer based on a variant of attention with complexity linear in the sequence length ☆695 · Updated 6 months ago
- Rotary Transformer ☆811 · Updated 2 years ago
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆728 · Updated last week
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆347 · Updated last year
- PyTorch library for fast transformer implementations ☆1,642 · Updated last year
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch ☆565 · Updated last month
- ☆572 · Updated this week
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆291 · Updated 4 months ago
- An implementation of Performer, a linear attention-based transformer, in PyTorch ☆1,093 · Updated 2 years ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale ☆1,460 · Updated this week
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" ☆1,163 · Updated last year
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆759 · Updated 4 months ago
- An up-to-date (2024) list of DATASETS, CODEBASES and PAPERS on Multi-Task Learning (MTL), from a machine-learning perspective ☆669 · Updated this week
- Long Range Arena for Benchmarking Efficient Transformers ☆727 · Updated 10 months ago
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆506 · Updated last year
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆517 · Updated 2 years ago
- An implementation of local windowed attention for language modeling ☆383 · Updated 2 months ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆1,320 · Updated this week
- Foundation Architecture for (M)LLMs ☆3,025 · Updated 6 months ago
- PyTorch implementation of the InfoNCE loss for self-supervised learning ☆481 · Updated 11 months ago
- Structured state space sequence models ☆2,455 · Updated 3 months ago
- Diffusion-LM ☆1,055 · Updated 3 months ago
- Reformer, the efficient Transformer, in PyTorch ☆2,116 · Updated last year
- ☆870 · Updated 5 months ago
- Official PyTorch implementation of "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆1,026 · Updated 3 months ago
- Vector (and Scalar) Quantization, in PyTorch ☆2,594 · Updated this week