thunlp / MoEfication
☆139 · Updated last year
Alternatives and similar repositories for MoEfication
Users interested in MoEfication are comparing it to the repositories listed below.
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆110 · Updated 3 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆54 · Updated 2 years ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆77 · Updated 2 years ago
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆48 · Updated 3 years ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆38 · Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆142 · Updated 2 years ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆91 · Updated 2 months ago
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆62 · Updated 10 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆157 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆74 · Updated 9 months ago
- Code for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) ☆42 · Updated last year
- Long Context Extension and Generalization in LLMs ☆58 · Updated 11 months ago
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆46 · Updated 2 years ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆219 · Updated 5 months ago
- Repo for the ACL 2023 Findings paper "Emergent Modularity in Pre-trained Transformers" ☆25 · Updated 2 years ago
- ☆53 · Updated last year
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆207 · Updated last year
- ☆269 · Updated last year
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 3 years ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆79 · Updated last year
- ☆104 · Updated last month
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆148 · Updated 5 months ago
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ☆201 · Updated 6 months ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆119 · Updated last year
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆132 · Updated 2 years ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆141 · Updated 11 months ago
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… ☆57 · Updated 9 months ago
- [ICLR 2023] "Learning to Grow Pretrained Models for Efficient Transformer Training" by Peihao Wang, Rameswar Panda, Lucas Torroba Hennige… ☆92 · Updated last year