SimiaoZuo / MoEBERT
This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022).
☆103 · Updated 2 years ago
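For context, the core idea of MoEBERT's importance-guided adaptation is to carve each trained BERT FFN into a set of experts, where every expert keeps the most important hidden neurons and the remaining neurons are distributed among experts. The PyTorch sketch below only illustrates that idea and is not the repo's actual API: `split_ffn_into_experts`, the choice of importance score, and the half-shared/half-distributed split are all assumptions made here.

```python
import torch
import torch.nn as nn

def split_ffn_into_experts(fc1: nn.Linear, fc2: nn.Linear,
                           importance: torch.Tensor,
                           num_experts: int, expert_dim: int) -> nn.ModuleList:
    """Hypothetical helper: carve a trained FFN (fc1 -> GELU -> fc2) into experts.

    importance[i] scores hidden neuron i (e.g., an accumulated |weight * gradient|
    statistic); the top-scoring half of each expert is shared by all experts.
    """
    order = torch.argsort(importance, descending=True)
    shared = order[: expert_dim // 2]   # most important neurons: copied into every expert
    rest = order[expert_dim // 2 :]     # remaining neurons: dealt out round-robin
    experts = []
    for e in range(num_experts):
        own = rest[e::num_experts][: expert_dim - shared.numel()]
        idx = torch.cat([shared, own])
        w1 = nn.Linear(fc1.in_features, idx.numel())
        w2 = nn.Linear(idx.numel(), fc2.out_features)
        with torch.no_grad():           # initialize experts from the dense FFN's weights
            w1.weight.copy_(fc1.weight[idx])
            w1.bias.copy_(fc1.bias[idx])
            w2.weight.copy_(fc2.weight[:, idx])
            w2.bias.copy_(fc2.bias)
        experts.append(nn.Sequential(w1, nn.GELU(), w2))
    return nn.ModuleList(experts)
```

A router then dispatches each token to a single expert during fine-tuning; the paper additionally pairs the adaptation with a layer-wise distillation objective to recover accuracy.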
Alternatives and similar repositories for MoEBERT:
Users interested in MoEBERT are comparing it to the libraries listed below.
- ☆124 · Updated 8 months ago
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆43 · Updated 2 years ago
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" (a minimal routing sketch appears after this list) ☆45 · Updated 2 years ago
- ☆98 · Updated 5 months ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆195 · Updated last year
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆57 · Updated 5 months ago
- This package implements THOR: Transformer with Stochastic Experts. ☆62 · Updated 3 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated 2 years ago
- ACL'23: Unified Demonstration Retriever for In-Context Learning ☆36 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆158 · Updated 9 months ago
- Retrieval as Attention ☆83 · Updated 2 years ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆152 · Updated 9 months ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆45 · Updated last year
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆130 · Updated last year
- ☆48 · Updated 11 months ago
- ICML 2022: Black-Box Tuning for Language-Model-as-a-Service & EMNLP 2022: BBTv2: Towards a Gradient-Free Future with Large Language Model… ☆267 · Updated 2 years ago
- A prototype repo for hybrid training with pipeline parallelism and distributed data parallelism, with comments on core code snippets. Feel free to… ☆55 · Updated last year
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆140 · Updated 2 years ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆36 · Updated 11 months ago
- ☆104 · Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆39 · Updated last year
- ☆39 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated last year
- [NeurIPS'23] Speculative Decoding with Big Little Decoder ☆89 · Updated last year
- DSIR: a large-scale data selection framework for language model training ☆244 · Updated 11 months ago
- Code associated with the paper "Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding" ☆173 · Updated last month
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆117 · Updated last month
- [SIGIR'24] The official implementation of MOELoRA. ☆153 · Updated 8 months ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆100 · Updated 2 years ago
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi ☆260 · Updated last year
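Several of the repositories above (StableMoE, THOR, "Sparse MoE as the New Dropout", MOELoRA, "Merging Experts into One") differ mainly in how tokens are routed to experts. For readers new to that design space, here is a minimal, hypothetical top-1 (switch-style) router sketch; `Top1Router` and its gating details are assumptions for illustration and are not taken from any of these repos.

```python
import torch
import torch.nn as nn

class Top1Router(nn.Module):
    """Hypothetical minimal top-1 (switch-style) router over a set of experts."""
    def __init__(self, hidden_dim: int, experts: nn.ModuleList):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, len(experts))
        self.experts = experts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, hidden_dim); each token is sent to exactly one expert
        probs = torch.softmax(self.gate(x), dim=-1)
        top_p, top_e = probs.max(dim=-1)        # winning expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_e == e
            if mask.any():
                # scale by the gate probability so the router receives gradient
                out[mask] = expert(x[mask]) * top_p[mask].unsqueeze(-1)
        return out
```

A jointly learned gate like this one can fluctuate early in training, which is the instability StableMoE targets: as I understand it, it first learns a routing strategy, then distills it into a lightweight router that is frozen for the remainder of training.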