huawei-noah / Efficient-NLP
☆95 · Updated last year
Alternatives and similar repositories for Efficient-NLP
Users interested in Efficient-NLP are comparing it to the libraries listed below.
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆143 · Updated 3 years ago
- This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1… ☆136 · Updated 2 years ago
- ☆130 · Updated 3 years ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆112 · Updated 3 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- ☆143 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆137 · Updated last year
- ☆34 · Updated 2 years ago
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches. ☆77 · Updated last year
- [ICLR 2023] "Learning to Grow Pretrained Models for Efficient Transformer Training" by Peihao Wang, Rameswar Panda, Lucas Torroba Hennige… ☆92 · Updated last year
- Retrieval as Attention ☆82 · Updated 3 years ago
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023). ☆26 · Updated last year
- ☆35 · Updated last year
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) ☆92 · Updated 2 years ago
- AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning (Zhou et al.; TACL 2024) ☆51 · Updated last year
- Efficient Transformers with Dynamic Token Pooling ☆67 · Updated 2 years ago
- ☆20 · Updated 3 years ago
- [NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Lea… ☆76 · Updated last year
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 3 years ago
- Transformers at any scale ☆42 · Updated 2 years ago
- Code for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) ☆46 · Updated 2 years ago
- Code Implementation for "NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models" (EMNLP … ☆17 · Updated 2 years ago
- The original Backpack Language Model implementation, a fork of FlashAttention ☆71 · Updated 2 years ago
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022 ☆63 · Updated 3 years ago
- ☆14 · Updated 2 years ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆58 · Updated 11 months ago
- Code for ACL 2023 paper: Pre-Training to Learn in Context ☆106 · Updated last year
- Code for the paper 'Data-Efficient FineTuning' ☆28 · Updated 2 years ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year