Alternatives and similar repositories for MoEfication
Users interested in MoEfication are comparing it to the libraries listed below.
- [ACL 2023 Findings] Emergent Modularity in Pre-trained Transformers (☆26, updated Jun 7, 2023)
- ☆13, updated Oct 13, 2025
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… (☆56, updated Feb 28, 2023)
- sigma-MoE layer (☆21, updated Jan 5, 2024)
- ☆57, updated Jun 10, 2024
- ☆13, updated Aug 23, 2024
- ☆19, updated Sep 15, 2022
- Official PyTorch Implementation of EMoE: Unlocking Emergent Modularity in Large Language Models [main conference @ NAACL 2024] (☆39, updated May 28, 2024)
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). (☆114, updated May 2, 2022)
- ☆26, updated May 30, 2023
- Triton-based implementation of Sparse Mixture of Experts. (☆266, updated Oct 3, 2025)
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". (☆34, updated Jun 11, 2025)
- ☆91, updated Aug 18, 2024
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) (☆1,001, updated Dec 6, 2024)
- BESA is a differentiable weight pruning technique for large language models. (☆17, updated Mar 4, 2024)
- ☆159, updated Feb 15, 2025
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot". (☆870, updated Aug 20, 2024)
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" (☆102, updated Sep 30, 2024)
- A curated list of awesome resources dedicated to Scaling Laws for LLMs (☆81, updated Apr 10, 2023)
- ☆156, updated Aug 24, 2021
- [AAAI 2025] Augmenting Math Word Problems via Iterative Question Composing (https://arxiv.org/abs/2401.09003) (☆23, updated Oct 2, 2025)
- Awesome Chinese Corpus Datasets and Models. (☆18, updated Oct 28, 2019)
- Parameter Efficient Transfer Learning with Diff Pruning (☆75, updated Feb 3, 2021)
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 (☆969, updated Dec 21, 2025)
- Code and data to accompany the camera-ready version of "Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Tra… (☆33, updated Sep 15, 2021)
- Code for ACL 2022 publication Transkimmer: Transformer Learns to Layer-wise Skim (☆22, updated Aug 21, 2022)
- ☆707, updated Dec 6, 2025
- [NLPCC 2022] Kformer: Knowledge Injection in Transformer Feed-Forward Layers (☆38, updated Oct 20, 2022)
- ☆352, updated Apr 2, 2024
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs (☆23, updated Nov 11, 2025)
- [NeurIPS 2024] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks (☆134, updated Nov 23, 2024)
- ☆22, updated Nov 26, 2022
- ☆29, updated May 4, 2024
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following (☆137, updated Jul 8, 2024)
- [NeurIPS 2023] Speculative Decoding with Big Little Decoder (☆97, updated Feb 6, 2024)
- ☆12, updated Oct 9, 2023
- Portal Tutorial (☆11, updated Feb 3, 2018)
- Effective Attention Sheds Light On Interpretability - Findings of ACL 2021 (☆11, updated May 16, 2021)
- Explanation Optimization (☆13, updated Oct 16, 2020)