davidmrau / mixture-of-experts
PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538
☆1,120 · Updated last year
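For orientation, the layer this repository re-implements routes each token through only a few expert networks via noisy top-k gating. Below is a minimal sketch of that gating step, following the gating equations of the paper; the class name, layer layout, and hyperparameters are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyTopKGate(nn.Module):
    """Minimal sketch of noisy top-k gating from Shazeer et al. (2017),
    https://arxiv.org/abs/1701.06538. Illustrative only -- not this
    repository's actual class or API."""

    def __init__(self, d_model: int, n_experts: int, k: int = 2):
        super().__init__()
        self.k = k
        self.w_gate = nn.Linear(d_model, n_experts, bias=False)   # W_g
        self.w_noise = nn.Linear(d_model, n_experts, bias=False)  # W_noise

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # H(x) = x W_g + eps * softplus(x W_noise), eps ~ N(0, 1);
        # the noise term helps spread load across experts during training.
        clean_logits = self.w_gate(x)
        if self.training:
            noise_std = F.softplus(self.w_noise(x))
            logits = clean_logits + torch.randn_like(clean_logits) * noise_std
        else:
            logits = clean_logits
        # KeepTopK: mask everything outside the k largest logits to -inf,
        # so the softmax assigns those experts exactly zero weight.
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        masked = torch.full_like(logits, float("-inf")).scatter(-1, topk_idx, topk_vals)
        return F.softmax(masked, dim=-1)  # (batch, n_experts), sparse per row

# Example: gate 4 token vectors of width 512 across 8 experts, 2 active each.
gate = NoisyTopKGate(d_model=512, n_experts=8, k=2)
weights = gate(torch.randn(4, 512))
assert (weights > 0).sum(dim=-1).max() <= 2
```

The returned weights are zero outside the selected top-k experts, which is what makes the layer sparse: only those experts need to run forward for a given token.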
Alternatives and similar repositories for mixture-of-experts
Users interested in mixture-of-experts are comparing it to the libraries listed below.
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆766 · Updated last year
- A fast MoE implementation for PyTorch ☆1,746 · Updated 4 months ago
- Transformer based on a variant of attention that is linear in complexity with respect to sequence length ☆777 · Updated last year
- Tutel MoE: optimized Mixture-of-Experts library; supports DeepSeek FP8/FP4 ☆842 · Updated this week
- A collection of AWESOME things about mixture-of-experts ☆1,143 · Updated 6 months ago
- A curated reading list of research in Mixture-of-Experts (MoE) ☆633 · Updated 7 months ago
- Pytorch library for fast transformer implementations ☆1,718 · Updated 2 years ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆341 · Updated last year
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆534 · Updated 3 years ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,613 · Updated last week
- Reformer, the efficient Transformer, in Pytorch ☆2,171 · Updated 2 years ago
- Rotary Transformer ☆971 · Updated 3 years ago
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 ☆1,272 · Updated 3 years ago
- Long Range Arena for Benchmarking Efficient Transformers ☆757 · Updated last year
- Code for ALBEF: a new vision-language pre-training method ☆1,667 · Updated 2 years ago
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆364 · Updated last year
- An All-MLP solution for Vision, from Google AI ☆1,025 · Updated 9 months ago
- 2024 up-to-date list of DATASETS, CODEBASES and PAPERS on Multi-Task Learning (MTL), from a Machine Learning perspective ☆761 · Updated 2 weeks ago
- Vector (and Scalar) Quantization, in Pytorch ☆3,333 · Updated last week
- PyTorch implementation of the InfoNCE loss for self-supervised learning. ☆558 · Updated last year
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆694 · Updated 6 months ago
- PyTorch implementation of Contrastive Learning methods ☆1,985 · Updated last year
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,212 · Updated 11 months ago
- An implementation of local windowed attention for language modeling ☆454 · Updated 5 months ago
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆535 · Updated last year
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,132 · Updated 3 years ago
- A quickstart and benchmark for PyTorch distributed training. ☆1,668 · Updated 10 months ago
- Collection of papers on state-space models ☆595 · Updated last month
- SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants. ☆1,080 · Updated 5 months ago