PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538
☆1,232 · updated Apr 19, 2024
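For orientation, here is a minimal sketch of the noisy top-k gating scheme the paper describes. The class names, parameter names, and shapes below are illustrative assumptions for this sketch, not this repository's actual API.

```python
# Illustrative sketch of noisy top-k gating (Shazeer et al., 2017).
# All names here are assumptions, not the repo's API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyTopKGate(nn.Module):
    def __init__(self, d_model: int, num_experts: int, k: int = 2):
        super().__init__()
        self.k = k
        self.w_gate = nn.Linear(d_model, num_experts, bias=False)   # clean routing logits
        self.w_noise = nn.Linear(d_model, num_experts, bias=False)  # per-expert noise scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model) -> gate weights: (batch, num_experts), zero outside top-k
        logits = self.w_gate(x)
        if self.training:
            noise_std = F.softplus(self.w_noise(x))
            logits = logits + torch.randn_like(logits) * noise_std
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        # Softmax over only the k selected logits; all others get weight 0.
        masked = torch.full_like(logits, float("-inf"))
        masked.scatter_(-1, topk_idx, topk_vals)
        return F.softmax(masked, dim=-1)

class MoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int, k: int = 2):
        super().__init__()
        self.gate = NoisyTopKGate(d_model, num_experts, k)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = self.gate(x)  # (batch, num_experts), sparse in the experts dimension
        # Dense loop over experts for clarity only; a practical implementation
        # dispatches each input only to its k selected experts.
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            out = out + gates[:, i:i + 1] * expert(x)
        return out

# Usage: moe = MoE(d_model=512, d_hidden=2048, num_experts=8, k=2)
#        y = moe(torch.randn(4, 512))  # -> (4, 512)
```

The paper also adds auxiliary load-balancing losses so routing does not collapse onto a few experts; those are omitted here for brevity.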
Alternatives and similar repositories for mixture-of-experts
Users interested in mixture-of-experts are comparing it to the libraries listed below.
- A fast MoE impl for PyTorch ☆1,845 · updated Feb 10, 2025
- A collection of AWESOME things about mixture-of-experts ☆1,269 · updated Dec 8, 2024
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 ☆973 · updated this week
- ☆707 · updated Dec 6, 2025
- A curated reading list of research in Mixture-of-Experts (MoE). ☆661 · updated Oct 30, 2024
- This package implements THOR: Transformer with Stochastic Experts. ☆64 · updated Oct 7, 2021
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,664 · updated Mar 8, 2024
- Fast and memory-efficient exact attention ☆22,460 · updated this week
- ☆144 · updated Jul 21, 2024
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ☆2,801 · updated Mar 1, 2026
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆114 · updated May 2, 2022
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,084 · updated Mar 3, 2026
- PyTorch implementation of LIMoE ☆52 · updated Apr 1, 2024
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,040 · updated Jan 23, 2026
- Train transformer language models with reinforcement learning. ☆17,523 · updated this week
- ☆89 · updated Apr 2, 2022
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,717 · updated Mar 3, 2026
- Mamba SSM architecture ☆17,311 · updated Feb 18, 2026
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,002 · updated Dec 6, 2024
- Ongoing research training transformer models at scale ☆15,535 · updated this week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆13,299 · updated Dec 17, 2024
- An open source implementation of CLIP. ☆13,460 · updated Feb 27, 2026
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models ☆2,303 · updated Jul 15, 2025
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · updated Feb 28, 2023
- Transformer related optimization, including BERT, GPT ☆6,398 · updated Mar 27, 2024
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,759 · updated this week
- Official PyTorch Implementation of "Scalable Diffusion Models with Transformers" ☆8,393 · updated May 31, 2024
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,474 · updated Mar 3, 2026
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,739 · updated this week
- Latest Advances on Multimodal Large Language Models ☆17,416 · updated this week
- Example models using DeepSpeed ☆6,797 · updated this week
- PyTorch extensions for high performance and large scale training. ☆3,400 · updated Apr 26, 2025
- Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficien… ☆136 · updated Jan 17, 2026
- Google Research ☆37,403 · updated this week
- The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights --… ☆36,461 · updated this week
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆32,176 · updated Sep 30, 2025
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,500 · updated Aug 12, 2024
- A curated list of reinforcement learning with human feedback resources (continually updated) ☆4,317 · updated Dec 9, 2025
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,528 · updated this week