[ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal, Shiwei Liu, Zhangyang Wang
☆56 · Feb 28, 2023 · Updated 3 years ago
Alternatives and similar repositories for Random-MoE-as-Dropout
Users interested in Random-MoE-as-Dropout are comparing it to the libraries listed below.
- sigma-MoE layer ☆21 · Jan 5, 2024 · Updated 2 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆64 · Oct 7, 2021 · Updated 4 years ago
- Mixture of Attention Heads ☆52 · Oct 10, 2022 · Updated 3 years ago
- ☆29 · May 24, 2024 · Updated last year
- [ACL 2023 Findings] Emergent Modularity in Pre-trained Transformers ☆26 · Jun 7, 2023 · Updated 2 years ago
- ☆13 · May 21, 2023 · Updated 2 years ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆30 · Dec 6, 2023 · Updated 2 years ago
- Implementation for ACL 2024 paper "Meta-Task Prompting Elicits Embeddings from Large Language Models" ☆12 · Jul 25, 2024 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆55 · Feb 24, 2026 · Updated last month
- ☆145 · Jul 21, 2024 · Updated last year
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆23 · Nov 11, 2025 · Updated 5 months ago
- ☆20 · Oct 31, 2022 · Updated 3 years ago
- ☆17 · Jun 11, 2025 · Updated 10 months ago
- Google DeepMind: Mixture of Depths Unofficial Implementation. ☆12 · May 29, 2024 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Apr 21, 2025 · Updated 11 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts (EMNLP 2023)" ☆45 · Feb 28, 2026 · Updated last month
- Official code for the paper CipherDAug: Ciphertext based Data Augmentation for Neural Machine Translation published at ACL 2022 main conf… ☆12 · Apr 6, 2023 · Updated 3 years ago
- ☆19 · Sep 15, 2022 · Updated 3 years ago
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆20 · Feb 16, 2024 · Updated 2 years ago
- ☆23 · Dec 11, 2024 · Updated last year
- Code for the paper "HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts via HyperNetwork" ☆33 · Nov 29, 2023 · Updated 2 years ago
- [ECCV 2024] Official PyTorch implementation of "Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts" ☆47 · Jul 4, 2024 · Updated last year
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Jun 12, 2024 · Updated last year
- ☆129 · Feb 4, 2026 · Updated 2 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆78 · Jun 23, 2025 · Updated 9 months ago
- Variance Covariance Regularization ☆14 · Jun 22, 2023 · Updated 2 years ago
- [ICLR 2022] "Bayesian Modeling and Uncertainty Quantification for Learning to Optimize: What, Why, and How" by Yuning You, Yue Cao, Tianl… ☆14 · Aug 19, 2022 · Updated 3 years ago
- ☆13 · Oct 29, 2021 · Updated 4 years ago
- Official PyTorch implementation of CD-MOE ☆12 · Mar 18, 2026 · Updated 3 weeks ago
- Code for ICML 2024 paper ☆36 · Sep 18, 2025 · Updated 6 months ago
- A fast MoE impl for PyTorch ☆1,847 · Feb 10, 2025 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆39 · Jun 11, 2025 · Updated 10 months ago
- ☆84 · Nov 10, 2025 · Updated 5 months ago
- Code repository for the NeurIPS 2024 paper "Toward Efficient Inference for Mixture of Experts". ☆19 · Oct 30, 2024 · Updated last year
- ☆91 · Aug 18, 2024 · Updated last year
- ☆19 · May 2, 2024 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆101 · Sep 30, 2024 · Updated last year
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆159 · Jul 9, 2025 · Updated 9 months ago
- Structured Pruning Adapters in PyTorch ☆19 · Aug 30, 2023 · Updated 2 years ago