JetRunner / MetaDistil
Code for ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning".
☆87 · Updated 3 years ago
Alternatives and similar repositories for MetaDistil
Users interested in MetaDistil are comparing it to the repositories listed below.
- Source code for our EMNLP 2021 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆62 · Updated 4 years ago
- The code for lifelong few-shot language learning ☆55 · Updated 3 years ago
- ☆67 · Updated last year
- Code for the EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆41 · Updated 3 years ago
- ☆157 · Updated 4 years ago
- Code for the EMNLP 2022 paper "Distilled Dual-Encoder Model for Vision-Language Understanding" ☆31 · Updated 2 years ago
- Code for the ACL 2022 paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" ☆63 · Updated 3 years ago
- The official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆104 · Updated 3 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆90 · Updated 4 years ago
- Official repository for CLRCMD (ACL 2022) ☆43 · Updated 2 years ago
- Code for the paper "Continual Learning for Text Classification with Information Disentanglement Based Regularization" ☆44 · Updated 2 years ago
- [ICLR 2022] Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners ☆130 · Updated 3 years ago
- Code for promptCSE (EMNLP 2022) ☆11 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 · Updated 2 years ago
- ☆108 · Updated 3 years ago
- ☆33 · Updated 4 years ago
- Official code for the paper "PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains" ☆51 · Updated 3 years ago
- VaLM: Visually-augmented Language Modeling (ICLR 2023) ☆56 · Updated 2 years ago
- Implementation of Variational Information Bottleneck for Effective Low-resource Fine-tuning (ICLR 2021) ☆43 · Updated 4 years ago
- Code for the ACL 2022 paper "Continual Sequence Generation with Adaptive Compositional Modules" ☆39 · Updated 3 years ago
- ☆64 · Updated 3 years ago
- ICLR 2022 ☆18 · Updated 3 years ago
- ☆21 · Updated 4 years ago
- ☆73 · Updated 3 years ago
- Must-read papers on improving efficiency for pre-trained language models ☆105 · Updated 3 years ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated 2 years ago
- 🎁 [ChatGPT4NLU] A Comparative Study on ChatGPT and Fine-tuned BERT ☆192 · Updated 2 years ago
- Released code for our ICLR 2023 paper ☆66 · Updated 2 years ago
- Official repo for "Imagination-Augmented Natural Language Understanding" (NAACL 2022) ☆17 · Updated 3 years ago