microsoft / Stochastic-Mixture-of-Experts
This package implements THOR: Transformer with Stochastic Experts.
☆65 · Updated Oct 7, 2021 (4 years ago)
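The core idea behind THOR is to replace the learned gating network of a standard Mixture-of-Experts layer with random routing: during training, each batch is sent to a randomly chosen expert (the paper additionally samples two experts and applies a consistency regularizer between their outputs). Below is a minimal NumPy sketch of that routing idea only, not the repo's actual API; the expert count, dimensions, and the inference-time averaging strategy are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_expert(d_in, d_out):
    # Each expert here is a simple linear map; in THOR the experts
    # are full Transformer FFN sublayers.
    W = rng.standard_normal((d_in, d_out)) * 0.1
    return lambda x: x @ W

experts = [make_expert(8, 8) for _ in range(4)]

def stochastic_moe_forward(x, training=True):
    if training:
        # Stochastic routing: pick one expert uniformly at random
        # instead of consulting a learned gate.
        idx = rng.integers(len(experts))
        return experts[idx](x)
    # At inference, average all experts (one simple deterministic
    # choice; assumed here for illustration).
    return np.mean([e(x) for e in experts], axis=0)

x = rng.standard_normal((2, 8))
y = stochastic_moe_forward(x)
print(y.shape)  # (2, 8)
```

Because no gate is trained, there is no load-balancing loss to tune; the trade-off is that the consistency regularizer is needed to keep randomly selected experts from diverging.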
Alternatives and similar repositories for Stochastic-Mixture-of-Experts
Users interested in Stochastic-Mixture-of-Experts are comparing it to the libraries listed below.
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… · ☆56 · Updated Feb 28, 2023 (2 years ago)
- sigma-MoE layer · ☆21 · Updated Jan 5, 2024 (2 years ago)
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" · ☆51 · Updated Jul 17, 2022 (3 years ago)
- ☆705 · Updated Dec 6, 2025 (2 months ago)
- ☆19 · Updated Oct 31, 2022 (3 years ago)
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 · ☆963 · Updated Dec 21, 2025 (last month)
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) · ☆1,225 · Updated Apr 19, 2024 (last year)
- ☆143 · Updated Jul 21, 2024 (last year)
- Neural Unification for Logic Reasoning over Language · ☆22 · Updated Nov 15, 2021 (4 years ago)
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization" · ☆34 · Updated Jun 11, 2025 (8 months ago)
- Lite Self-Training · ☆30 · Updated Jul 25, 2023 (2 years ago)
- [EMNLP 2022] Code for our paper "ZeroGen: Efficient Zero-shot Learning via Dataset Generation" · ☆48 · Updated Feb 18, 2022 (3 years ago)
- The public GitHub repository for our paper "Transformer with a Mixture of Gaussian Keys" · ☆28 · Updated Aug 13, 2022 (3 years ago)
- Code and data for the paper "Multi-Source Domain Adaptation with Mixture of Experts" (EMNLP 2018) · ☆69 · Updated Aug 30, 2020 (5 years ago)
- Code for the paper "Pretrained Models for Multilingual Federated Learning" (NAACL 2022) · ☆11 · Updated Aug 9, 2022 (3 years ago)
- Research code for various data-parallel pretraining experiments based on PyTorch GPT-2 · ☆11 · Updated Dec 16, 2022 (3 years ago)
- Domain Adaptation and Adapters · ☆16 · Updated Feb 28, 2023 (2 years ago)
- MNASNet implementation and pre-trained model in PyTorch · ☆10 · Updated Mar 20, 2019 (6 years ago)
- Code for "COMET: Cardinality Constrained Mixture of Experts with Trees and Local Search" · ☆11 · Updated Jun 21, 2023 (2 years ago)
- Implementation of ACProp (momentum centering and asynchronous update for adaptive gradient methods, NeurIPS 2021) · ☆16 · Updated Oct 11, 2021 (4 years ago)
- Codebase for the ACL 2023 paper "Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models' Memori… · ☆52 · Updated Oct 8, 2023 (2 years ago)
- Code for the paper "ElasticTrainer: Speeding Up On-Device Training with Runtime Elastic Tensor Selection" (MobiSys '23) · ☆14 · Updated Nov 1, 2023 (2 years ago)
- ☆156 · Updated Aug 24, 2021 (4 years ago)
- ☆15 · Updated Feb 28, 2024 (last year)
- ☆17 · Updated Mar 3, 2025 (11 months ago)
- Implementation of the paper "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning" (https://arxiv.org/abs/2205.1…) · ☆136 · Updated Aug 14, 2023 (2 years ago)
- The intermediate goal of the project is to train a GPT-like architecture to learn to summarize Reddit posts from human preferences, as th… · ☆12 · Updated Jul 14, 2021 (4 years ago)
- Easily serialize dataclasses to and from tensors (PyTorch, NumPy) · ☆18 · Updated Apr 10, 2021 (4 years ago)
- Replication package for the ICSE 2022 submission "Automatic Merge Conflict Resolution Tools: The Current State and Barriers to Adoptio… · ☆12 · Updated Sep 14, 2021 (4 years ago)
- Accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories" by Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… · ☆99 · Updated Sep 5, 2021 (4 years ago)
- Fast-Slow Recurrent Neural Networks · ☆14 · Updated Jan 31, 2018 (8 years ago)
- ☆17 · Updated Jun 4, 2021 (4 years ago)
- [EMNLP'23] Code for "Generating Data for Symbolic Language with Large Language Models" · ☆18 · Updated Oct 21, 2023 (2 years ago)
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" (ACL 2022) · ☆63 · Updated Mar 23, 2022 (3 years ago)
- ISD: Self-Supervised Learning by Iterative Similarity Distillation · ☆36 · Updated Oct 12, 2021 (4 years ago)
- ☆100 · Updated Dec 8, 2021 (4 years ago)
- A fast MoE implementation for PyTorch · ☆1,831 · Updated Feb 10, 2025 (last year)
- [ACL'21 Findings] "Why Machine Reading Comprehension Models Learn Shortcuts?" · ☆16 · Updated Aug 8, 2023 (2 years ago)
- The WaveFunctionCollapse algorithm in Julia · ☆22 · Updated Jan 2, 2019 (7 years ago)