[ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal, Shiwei Liu, Zhangyang Wang
☆56 · Updated Feb 28, 2023
Alternatives and similar repositories for Random-MoE-as-Dropout
Users who are interested in Random-MoE-as-Dropout are comparing it to the repositories listed below.
- sigma-MoE layer ☆21 · Updated Jan 5, 2024
- Mixture of Attention Heads ☆52 · Updated Oct 10, 2022
- This package implements THOR: Transformer with Stochastic Experts. ☆64 · Updated Oct 7, 2021
- ☆29 · Updated May 24, 2024
- [ACL 2023 Findings] Emergent Modularity in Pre-trained Transformers ☆26 · Updated Jun 7, 2023
- ☆13 · Updated May 21, 2023
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆30 · Updated Dec 6, 2023
- Implementation for the ACL 2024 paper "Meta-Task Prompting Elicits Embeddings from Large Language Models" ☆12 · Updated Jul 25, 2024
- Official code for the paper "Attention as a Hypernetwork" ☆55 · Updated Feb 24, 2026
- ☆145 · Updated Jul 21, 2024
- ☆19 · Updated Oct 31, 2022
- ☆17 · Updated Jun 11, 2025
- Unofficial implementation of Google DeepMind's Mixture of Depths. ☆12 · Updated May 29, 2024
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated Apr 21, 2025
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆44 · Updated Feb 28, 2026
- Official code for the paper "CipherDAug: Ciphertext based Data Augmentation for Neural Machine Translation", published at ACL 2022 main conf… ☆12 · Updated Apr 6, 2023
- ☆19 · Updated Sep 15, 2022
- The official implementation of the paper "Rethinking Pruning for Vision-Language Models: Strategies for Effective Sparsity". ☆15 · Updated Jul 2, 2024
- Code for the paper "HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts via HyperNetwork" ☆33 · Updated Nov 29, 2023
- [ACL 2023] HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation ☆14 · Updated Jul 11, 2023
- [ECCV 2024] Official PyTorch implementation of "Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts" ☆47 · Updated Jul 4, 2024
- ☆125 · Updated Feb 4, 2026
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆76 · Updated Jun 23, 2025
- Variance-Covariance Regularization ☆14 · Updated Jun 22, 2023
- [ICLR 2022] "Bayesian Modeling and Uncertainty Quantification for Learning to Optimize: What, Why, and How" by Yuning You, Yue Cao, Tianl… ☆14 · Updated Aug 19, 2022
- ☆13 · Updated Oct 29, 2021
- Official PyTorch implementation of CD-MOE ☆12 · Updated Mar 18, 2026
- A fast MoE implementation for PyTorch ☆1,846 · Updated Feb 10, 2025
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆39 · Updated Jun 11, 2025
- ☆84 · Updated Nov 10, 2025
- ☆91 · Updated Aug 18, 2024
- Code repository for the NeurIPS 2024 paper "Toward Efficient Inference for Mixture of Experts" ☆19 · Updated Oct 30, 2024
- ☆19 · Updated May 2, 2024
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆101 · Updated Sep 30, 2024
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆156 · Updated Jul 9, 2025
- Structured Pruning Adapters in PyTorch ☆19 · Updated Aug 30, 2023
- Code repository for "RL Grokking Recipe: How RL Unlocks and Transfers New Algorithms in LLMs" ☆32 · Updated Oct 12, 2025
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆57 · Updated Nov 20, 2024
- [EMNLP 2024] Multi-modal reasoning problems via code generation. ☆28 · Updated Feb 5, 2025