SimiaoZuo / MoEBERT
This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022).
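Below is a minimal sketch of the importance-guided adaptation idea, not this repo's actual API (the helper names and the learned top-1 router are illustrative assumptions; the paper's importance scoring and layer-wise distillation objective are omitted): a trained BERT FFN's hidden neurons are ranked by importance, the top-scoring neurons are shared across all experts, and the rest are split disjointly among them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def split_ffn_into_experts(w_in, w_out, importance, n_experts, n_shared):
    # Hypothetical helper, for illustration only. Ranks hidden neurons by a
    # precomputed importance score; the top `n_shared` neurons are copied into
    # every expert, and the remaining neurons are split disjointly.
    # w_in: (d_ff, d_model), w_out: (d_model, d_ff), importance: (d_ff,)
    order = torch.argsort(importance, descending=True)
    shared, rest = order[:n_shared], order[n_shared:]
    experts = []
    for part in rest.chunk(n_experts):
        idx = torch.cat([shared, part])
        experts.append((w_in[idx].clone(), w_out[:, idx].clone()))
    return experts

class MoEFFN(nn.Module):
    # Hard top-1 routing per token over the adapted experts. A learned router
    # is used here as a simplification; it is not necessarily the routing
    # strategy used in this repo.
    def __init__(self, experts, d_model):
        super().__init__()
        self.router = nn.Linear(d_model, len(experts))
        self.w_in = nn.ParameterList(nn.Parameter(wi) for wi, _ in experts)
        self.w_out = nn.ParameterList(nn.Parameter(wo) for _, wo in experts)

    def forward(self, x):  # x: (n_tokens, d_model)
        choice = self.router(x).argmax(dim=-1)  # expert index per token
        y = torch.empty_like(x)
        for e in range(len(self.w_in)):
            mask = choice == e
            h = F.gelu(x[mask] @ self.w_in[e].T)  # (k, d_expert)
            y[mask] = h @ self.w_out[e].T         # back to (k, d_model)
        return y

# Example: adapt a dense 768 -> 3072 FFN into 4 experts, sharing the
# 512 most important neurons. Random scores stand in for real importance.
w_in, w_out = torch.randn(3072, 768), torch.randn(768, 3072)
importance = torch.rand(3072)
moe = MoEFFN(split_ffn_into_experts(w_in, w_out, importance, 4, 512), 768)
out = moe(torch.randn(10, 768))
```

Sharing the highest-importance neurons keeps every expert close to the dense teacher at initialization, which is what makes the subsequent distillation-based adaptation tractable.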
Related projects
Alternatives and complementary repositories for MoEBERT
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408)
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022).
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long)
- Code for the paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings)
- Retrieval as Attention
- This package implements THOR: Transformer with Stochastic Experts.
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024)
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning"
- ACL'23: Unified Demonstration Retriever for In-Context Learning
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings
- [NeurIPS'23] Speculative Decoding with Big Little Decoder
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language Models
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts"
- Must-read papers on improving efficiency for pre-trained language models.
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal, et al.
- [KDD'22] Learned Token Pruning for Transformers
- Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method; GKD: A General Knowledge Distillation…
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main)
- ICML'2022: Black-Box Tuning for Language-Model-as-a-Service & EMNLP'2022: BBTv2: Towards a Gradient-Free Future with Large Language Model…
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022)
- AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning (Zhou et al., TACL)
- [ICML 2024] Selecting High-Quality Data for Training Language Models
- DSIR: a large-scale data selection framework for language model training
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" (ACL 2022)