[ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal, Shiwei Liu, Zhangyang Wang
☆56 · Feb 28, 2023 · Updated 3 years ago
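For context on the technique the headline repo implements, below is a minimal sketch of random-routing MoE used as a dropout-style regularizer: a frozen, randomly initialized router dispatches each token to k expert FFNs, so the stochastic expert selection regularizes like structured dropout. All names here (RandomRoutedMoE, d_model, num_experts, k) are illustrative assumptions, not the repository's actual API, and the paper's schedule for growing the number of active experts is omitted.

```python
# Minimal sketch (not the authors' implementation) of sparse MoE as dropout:
# a frozen random router picks k experts per token, so expert selection is
# fixed noise rather than a learned gate.
import torch
import torch.nn as nn


class RandomRoutedMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        ])
        # Frozen random router: its weights are never trained, which is what
        # makes the routing behave like structured dropout.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.router.weight.requires_grad_(False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        scores = self.router(x)                              # (B, S, E)
        topk = scores.topk(self.k, dim=-1)                   # top-k experts per token
        weights = topk.values.softmax(dim=-1)                # renormalize over the k picks
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = topk.indices.eq(e)                         # (B, S, k): slots routed to expert e
            if sel.any():
                w_e = (weights * sel).sum(-1, keepdim=True)  # (B, S, 1) combined weight
                token_mask = sel.any(-1)                     # (B, S) tokens hitting expert e
                out[token_mask] += w_e[token_mask] * expert(x[token_mask])
        return out
```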
Alternatives and similar repositories for Random-MoE-as-Dropout
Users interested in Random-MoE-as-Dropout are comparing it to the repositories listed below.
- sigma-MoE layer ☆21 · Jan 5, 2024 · Updated 2 years ago
- Mixture of Attention Heads ☆51 · Oct 10, 2022 · Updated 3 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆64 · Oct 7, 2021 · Updated 4 years ago
- Official code for the paper "Attention as a Hypernetwork" ☆51 · Feb 24, 2026 · Updated last week
- [ACL 2023 Findings] Emergent Modularity in Pre-trained Transformers ☆26 · Jun 7, 2023 · Updated 2 years ago
- ☆29 · May 24, 2024 · Updated last year
- ☆17 · Jun 11, 2025 · Updated 8 months ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Apr 21, 2025 · Updated 10 months ago
- ☆143 · Jul 21, 2024 · Updated last year
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆30 · Dec 6, 2023 · Updated 2 years ago
- ☆19 · Oct 31, 2022 · Updated 3 years ago
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆20 · Feb 16, 2024 · Updated 2 years ago
- ☆122 · Feb 4, 2026 · Updated last month
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts (EMNLP 2023)"