This package implements THOR: Transformer with Stochastic Experts.
(☆64, updated Oct 7, 2021)
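As a rough illustration of the stochastic-experts idea behind THOR, the simplified PyTorch sketch below routes each forward pass during training to one uniformly sampled expert instead of using a learned gate, and averages all experts at inference time. This is an assumption-laden sketch, not the package's actual API: the class name `StochasticMoE` and all sizes are illustrative, and the real THOR training recipe additionally samples pairs of experts and applies a consistency regularizer between their outputs.

```python
# Minimal sketch of a stochastic-expert layer (illustrative, not THOR's API).
import torch
import torch.nn as nn


class StochasticMoE(nn.Module):
    """Each expert is a small feed-forward block over the model dimension."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_ff),
                nn.ReLU(),
                nn.Linear(d_ff, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Stochastic routing: pick one expert uniformly at random,
            # rather than through a learned gating network.
            idx = torch.randint(len(self.experts), (1,)).item()
            return self.experts[idx](x)
        # Inference: one simple deterministic choice is to average all experts.
        return torch.stack([e(x) for e in self.experts]).mean(dim=0)


layer = StochasticMoE(d_model=16, d_ff=32, n_experts=4)
layer.eval()  # deterministic averaging path
out = layer(torch.randn(2, 5, 16))
print(out.shape)  # torch.Size([2, 5, 16])
```

Because routing is random rather than learned, no load-balancing loss is needed during training; the trade-off is that inference must either average experts (as above) or fix a sampling scheme.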
Alternatives and similar repositories for Stochastic-Mixture-of-Experts
Users interested in Stochastic-Mixture-of-Experts are also comparing it to the libraries listed below.
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… (☆56, updated Feb 28, 2023)
- sigma-MoE layer (☆21, updated Jan 5, 2024)
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" (☆51, updated Jul 17, 2022)
- Tutel MoE: an optimized Mixture-of-Experts library supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 (☆969, updated Dec 21, 2025)
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) (☆1,232, updated Apr 19, 2024)
- The official repository for the paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization" (☆34, updated Jun 11, 2025)
- Neural Unification for Logic Reasoning over Language (☆22, updated Nov 15, 2021)
- [EMNLP 2022] Code for the paper "ZeroGen: Efficient Zero-shot Learning via Dataset Generation" (☆48, updated Feb 18, 2022)
- The public GitHub repository for the paper "Transformer with a Mixture of Gaussian Keys" (☆28, updated Aug 13, 2022)
- Research code on data-parallel pretraining strategies, based on PyTorch GPT-2 (☆11, updated Dec 16, 2022)
- MNASNet implementation and pre-trained model in PyTorch (☆10, updated Mar 20, 2019)
- Code for the NAACL 2022 paper "Pretrained Models for Multilingual Federated Learning" (☆11, updated Aug 9, 2022)
- Domain Adaptation and Adapters (☆16, updated Feb 28, 2023)
- Implementation of ACProp (momentum centering and asynchronous update for adaptive gradient methods, NeurIPS 2021) (☆16, updated Oct 11, 2021)
- Code for "COMET: Cardinality Constrained Mixture of Experts with Trees and Local Search" (☆11, updated Jun 21, 2023)
- Code for the MobiSys '23 paper "ElasticTrainer: Speeding Up On-Device Training with Runtime Elastic Tensor Selection" (☆14, updated Nov 1, 2023)
- The official repository for the experiments in the paper "Patch-level Routing in Mixture-of-Experts is Provably Sample-ef… (☆14, updated Feb 12, 2026)
- This PyTorch package implements MoEBERT: From BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022) (☆114, updated May 2, 2022)
- Implementation of the paper "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning" (https://arxiv.org/abs/2205.1…) (☆138, updated Aug 14, 2023)
- The intermediate goal of the project is to train a GPT-like architecture to learn to summarise Reddit posts from human preferences, as th… (☆12, updated Jul 14, 2021)
- Replication package for the ICSE 2022 submission "Automatic Merge Conflict Resolution Tools: The Current State and Barriers to Adoptio… (☆12, updated Sep 14, 2021)
- Easily serialize dataclasses to and from tensors (PyTorch, NumPy) (☆18, updated Apr 10, 2021)
- [EMNLP '23] Code for "Generating Data for Symbolic Language with Large Language Models" (☆18, updated Oct 21, 2023)
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories" by Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… (☆100, updated Sep 5, 2021)
- Code for the ACL 2022 paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" (☆63, updated Mar 23, 2022)
- A fast MoE implementation for PyTorch (☆1,845, updated Feb 10, 2025)
- Task Compass: Scaling Multi-task Pre-training with Task Prefix (EMNLP 2022 Findings; more to be updated) (☆22, updated Oct 17, 2022)
- [ACL '21 Findings] "Why Machine Reading Comprehension Models Learn Shortcuts?" (☆16, updated Aug 8, 2023)
- Learning adapter weights from task descriptions (☆19, updated Nov 12, 2023)