JetRunner / MetaDistil
Code for ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning".
☆87 · Updated 3 years ago
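MetaDistil's core idea is a bilevel loop: the student takes a "virtual" distillation step, the teacher is then updated by meta-gradients from that virtual student's loss on a held-out quiz batch, and only afterwards does the student take its real step. As a rough orientation only, below is a minimal PyTorch sketch of that pattern, assuming classifier-style `teacher`/`student` modules that return logits; the names (`meta_distill_step`, `kd_loss`, `quiz_x`, `inner_lr`) are illustrative assumptions and this is not the repository's actual training code.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0


def kd_loss(student_logits, teacher_logits, T=2.0):
    # Standard softened-label distillation loss (Hinton et al.).
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)


def meta_distill_step(teacher, student, teacher_opt, student_opt,
                      x, y, quiz_x, quiz_y, inner_lr=1e-3):
    # 1) "Virtual" student update on the training batch, keeping the graph so
    #    that second-order gradients can later flow back into the teacher.
    s_logits = student(x)
    inner_loss = kd_loss(s_logits, teacher(x)) + F.cross_entropy(s_logits, y)
    names, params = zip(*student.named_parameters())
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    updated = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}

    # 2) Meta (teacher) update: make the virtually updated student perform
    #    well on a held-out quiz batch.
    quiz_logits = functional_call(student, updated, (quiz_x,))
    quiz_loss = F.cross_entropy(quiz_logits, quiz_y)
    teacher_opt.zero_grad()
    quiz_loss.backward()
    teacher_opt.step()

    # 3) Real student update against the refreshed teacher.
    student_opt.zero_grad()
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    (kd_loss(s_logits, t_logits) + F.cross_entropy(s_logits, y)).backward()
    student_opt.step()
```

Details such as how quiz batches are drawn and exactly which losses are distilled differ in the actual implementation; see the repository for the authors' version.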
Alternatives and similar repositories for MetaDistil
Users that are interested in MetaDistil are comparing it to the libraries listed below
- Source code for our EMNLP'21 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆62 · Updated 4 years ago
- Code for EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆41 · Updated 3 years ago
- The code for lifelong few-shot language learning ☆55 · Updated 3 years ago
- ☆67 · Updated last year
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆89 · Updated 4 years ago
- Code for paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022 ☆63 · Updated 3 years ago
- ☆158 · Updated 4 years ago
- Code for EMNLP 2022 paper "Distilled Dual-Encoder Model for Vision-Language Understanding" ☆31 · Updated 2 years ago
- ☆33 · Updated 4 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆104 · Updated 3 years ago
- [ICLR 2022] Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners ☆130 · Updated 3 years ago
- ☆32 · Updated 3 years ago
- ICLR 2022 ☆18 · Updated 3 years ago
- Official code for the paper "PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains" ☆51 · Updated 3 years ago
- Implementation for "Variational Information Bottleneck for Effective Low-resource Fine-tuning", ICLR 2021 ☆41 · Updated 4 years ago
- Implementation of the paper "Consistent Representation Learning for Continual Relation Extraction" (Findings of ACL 2022) ☆26 · Updated 3 years ago
- Code for promptCSE, EMNLP 2022 ☆11 · Updated 2 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 · Updated last year
- Code for the ACL 2023 paper "Lifting the Curse of Capacity Gap in Distilling Language Models" ☆29 · Updated 2 years ago
- Official repository for CLRCMD (appears in ACL 2022) ☆42 · Updated 2 years ago
- Code for the ACL 2022 paper "Continual Sequence Generation with Adaptive Compositional Modules" ☆39 · Updated 3 years ago
- [NeurIPS 2022] Non-Linguistic Supervision for Contrastive Learning of Sentence Embeddings ☆22 · Updated 2 years ago
- Advances in few-shot learning, especially for NLP applications. ☆32 · Updated 2 years ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated 2 years ago
- ☆21 · Updated 4 years ago
- Codes for the paper: "Continual Learning for Text Classification with Information Disentanglement Based Regularization" ☆44 · Updated 2 years ago
- ☆107 · Updated 3 years ago
- Must-read papers on improving efficiency for pre-trained language models. ☆105 · Updated 3 years ago
- Source code for paper "Contrastive Out-of-Distribution Detection for Pretrained Transformers", EMNLP 2021 ☆40 · Updated 4 years ago
- ☆64 · Updated 3 years ago