huawei-noah / Efficient-NLP
☆95 · Updated last year
Alternatives and similar repositories for Efficient-NLP
Users interested in Efficient-NLP are comparing it to the repositories listed below.
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆142 · Updated 3 years ago
- This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1…) ☆134 · Updated 2 years ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆112 · Updated 3 years ago
- ☆130 · Updated 3 years ago
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) ☆91 · Updated 2 years ago
- The original Backpack Language Model implementation, a fork of FlashAttention ☆69 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆55 · Updated 2 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆135 · Updated last year
- Retrieval as Attention ☆83 · Updated 2 years ago
- Transformers at any scale ☆41 · Updated last year
- ☆139 · Updated last year
- DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022) ☆50 · Updated 2 years ago
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023). ☆26 · Updated last year
- [NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Lea…" ☆76 · Updated last year
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year
- On Transferability of Prompt Tuning for Natural Language Processing ☆100 · Updated last year
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆47 · Updated last year
- Codes for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) ☆44 · Updated last year
- Code and data for "Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation" (EMNLP 2023) ☆64 · Updated last year
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective"