This package implements THOR: Transformer with Stochastic Experts.
☆64 · updated Oct 7, 2021
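For orientation, here is a minimal sketch of the idea THOR is built on: each FFN block holds several expert FFNs, one expert is sampled uniformly at random during training (no learned router; the paper additionally trains pairs of sampled experts with a consistency regularizer), and expert outputs are averaged at inference. All class and parameter names below are illustrative, not this repository's actual API.

```python
import torch
import torch.nn as nn

class StochasticExpertFFN(nn.Module):
    """THOR-style stochastic expert layer (illustrative sketch, not the repo's API).

    Training: one expert FFN is sampled uniformly at random (no learned router).
    Inference: expert outputs are averaged (one simple deterministic choice).
    """

    def __init__(self, d_model: int, d_ff: int, num_experts: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Random routing: sample one expert for this forward pass.
            idx = int(torch.randint(len(self.experts), (1,)))
            return self.experts[idx](x)
        # Inference: average all experts.
        return torch.stack([expert(x) for expert in self.experts]).mean(dim=0)

# Usage: drop in where a Transformer's FFN sublayer would go.
layer = StochasticExpertFFN(d_model=512, d_ff=2048)
y = layer(torch.randn(8, 16, 512))  # (batch, seq, d_model)
```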
Alternatives and similar repositories for Stochastic-Mixture-of-Experts
Users interested in Stochastic-Mixture-of-Experts are comparing it to the repositories listed below.
- ☆19 · updated Sep 15, 2022
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… · ☆56 · updated Feb 28, 2023
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" · ☆51 · updated Jul 17, 2022
- ☆13 · updated May 21, 2023
- sigma-MoE layer · ☆21 · updated Jan 5, 2024
- ☆20 · updated Oct 31, 2022
- ☆713 · updated Dec 6, 2025
- ☆145 · updated Jul 21, 2024
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022) · ☆114 · updated May 2, 2022
- Domain Adaptation and Adapters · ☆16 · updated Feb 28, 2023
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538); see the sketch after this list · ☆1,240 · updated Apr 19, 2024
- Codebase for the ACL 2023 paper "Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models' Memori… · ☆52 · updated Oct 8, 2023
- Implementation of the paper "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning" (https://arxiv.org/abs/2205.1…) · ☆138 · updated Aug 14, 2023
- Implementation of the AAAI 2022 paper "Go Wider Instead of Deeper" · ☆32 · updated Oct 27, 2022
- Code and data for the paper "Multi-Source Domain Adaptation with Mixture of Experts" (EMNLP 2018) · ☆68 · updated Aug 30, 2020
- ☆16 · updated Oct 26, 2018
- Inference framework for MoE layers based on TensorRT, with Python bindings · ☆41 · updated May 31, 2021
- Compiler for dynamic neural networks · ☆45 · updated Nov 13, 2023
- ☆89 · updated Apr 2, 2022
- ☆158 · updated Aug 24, 2021
- Research code based on PyTorch GPT-2 for studying various data-parallel pretraining setups · ☆11 · updated Dec 16, 2022
- [ACL'21 Findings] "Why Machine Reading Comprehension Models Learn Shortcuts?" · ☆16 · updated Aug 8, 2023
- Code for the NAACL 2022 paper "Pretrained Models for Multilingual Federated Learning" · ☆11 · updated Aug 9, 2022
- Source code for "Gradient Based Memory Editing for Task-Free Continual Learning", 4th Lifelong ML Workshop @ ICML 2020 · ☆17 · updated Dec 8, 2022
- [EMNLP 2022] Code for the paper "ZeroGen: Efficient Zero-shot Learning via Dataset Generation" · ☆48 · updated Feb 18, 2022
- Neural Unification for Logic Reasoning over Language · ☆22 · updated Nov 15, 2021
- A fast MoE implementation for PyTorch · ☆1,845 · updated Feb 10, 2025
- Official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) · ☆104 · updated Dec 1, 2022
- Source code for "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) · ☆44 · updated Feb 28, 2026
- Public repository for the paper "Transformer with a Mixture of Gaussian Keys" · ☆28 · updated Aug 13, 2022
- ☆16 · updated Dec 9, 2023
- Mixture-of-Experts (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations · ☆12 · updated Feb 11, 2024
- [EMNLP'23] Code for "Generating Data for Symbolic Language with Large Language Models" · ☆18 · updated Oct 21, 2023
- Accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories" by Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… · ☆100 · updated Sep 5, 2021
- Mixture of Attention Heads · ☆52 · updated Oct 10, 2022
- Code for the paper "Modality Plug-and-Play: Elastic Modality Adaptation in Multimodal LLMs for Embodied AI" · ☆13 · updated Jan 19, 2024
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models · ☆848 · updated Sep 13, 2023
- Code for the paper "ElasticTrainer: Speeding Up On-Device Training with Runtime Elastic Tensor Selection" (MobiSys '23) · ☆14 · updated Nov 1, 2023
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] · ☆24 · updated Nov 21, 2024
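Several entries above, including the Shazeer et al. re-implementation and the Sparsely-Gated Mixture of Experts package, center on the same top-k gated MoE layer. As a hedged reference point, here is a minimal sketch of that gating scheme; class and parameter names are illustrative, and the dense dispatch loop is kept for clarity, so this is not any listed repository's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKGatedMoE(nn.Module):
    """Illustrative top-k sparsely-gated MoE layer (after Shazeer et al., 2017).

    A learned gate scores all experts per token; only the top-k experts run,
    and their outputs are combined with softmax-renormalized gate weights.
    """

    def __init__(self, d_model: int, d_ff: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.shape[-1])                       # (n_tokens, d_model)
        topk_scores, topk_idx = self.gate(tokens).topk(self.k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)                  # renormalize over the kept experts
        out = torch.zeros_like(tokens)
        # Dense dispatch loop for clarity; real implementations batch tokens per expert.
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape(x.shape)
```

Production implementations in the repositories above add what this sketch omits: noisy gating, capacity limits, and load-balancing auxiliary losses to keep experts evenly utilized.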