Hunter-DDM / stablemoe
Code for the ACL-2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts"
☆49 · Updated 3 years ago
Alternatives and similar repositories for stablemoe
Users interested in stablemoe are comparing it to the libraries listed below.
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 3 years ago
- ☆139 · Updated last year
- [NAACL 2022] "Learning to Win Lottery Tickets in BERT Transfer via Task-agnostic Mask Training", Yuanxin Liu, Fandong Meng, Zheng Lin, Pe… ☆15 · Updated 2 years ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆77 · Updated 2 years ago
- ☆31 · Updated 2 years ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆38 · Updated last year
- ☆53 · Updated last year
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆114 · Updated 2 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 · Updated last year
- Implementation of ICML 2023 paper: Specializing Smaller Language Models towards Multi-Step Reasoning. ☆131 · Updated 2 years ago
- Methods and evaluation for aligning language models temporally ☆29 · Updated last year
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆78 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆102 · Updated 2 years ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆142 · Updated 3 years ago
- ☆54 · Updated 2 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 4 years ago
- Codes for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) ☆44 · Updated last year
- ☆86 · Updated 2 years ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆112 · Updated 3 years ago
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆66 · Updated 2 years ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆74 · Updated 10 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆54 · Updated 2 years ago
- ☆20 · Updated 4 years ago
- [Findings of EMNLP 2022] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated 2 years ago
- ☆105 · Updated last month
- ☆157 · Updated 4 years ago
- VaLM: Visually-augmented Language Modeling. ICLR 2023. ☆56 · Updated 2 years ago
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆23 · Updated last year