JetRunner / MetaDistil
Code for ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning".
☆86 · Updated 2 years ago
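The paper's titular idea, distillation where the teacher learns to teach, works roughly as follows: the teacher's parameters are updated by back-propagating the student's loss on a small held-out "quiz" batch through a simulated, differentiable student update. Below is a minimal PyTorch sketch of that pattern; every name here (`teacher_meta_step`, `x_quiz`, `inner_lr`, ...) is an illustrative assumption, not the repository's actual API.

```python
# Minimal sketch of knowledge distillation with a meta-learned teacher.
# Assumed inputs: `x_train` is a training batch, `(x_quiz, y_quiz)` a small
# labeled quiz batch. Not the repository's actual implementation.
import torch
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0

def kd_loss(student_logits, teacher_logits, T=2.0):
    # Standard soft-label distillation: KL between temperature-softened
    # distributions, scaled by T^2 to keep gradient magnitudes comparable.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

def teacher_meta_step(student, teacher, teacher_opt,
                      x_train, x_quiz, y_quiz, inner_lr=1e-3):
    # 1) Simulate one student SGD step on the teacher's soft labels,
    #    keeping the graph so the step is differentiable w.r.t. the teacher.
    distill = kd_loss(student(x_train), teacher(x_train))
    names, params = zip(*student.named_parameters())
    grads = torch.autograd.grad(distill, params, create_graph=True)
    virtual = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}

    # 2) Evaluate the virtually updated student on the labeled quiz batch.
    quiz_logits = functional_call(student, virtual, (x_quiz,))
    quiz_loss = F.cross_entropy(quiz_logits, y_quiz)

    # 3) The quiz loss depends on the teacher through the soft labels, so
    #    this backward pass teaches the teacher to teach. Only the teacher
    #    optimizer steps here; the real student update would then be redone
    #    with the improved teacher.
    teacher_opt.zero_grad()
    quiz_loss.backward()
    teacher_opt.step()
    return quiz_loss.item()
```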
Alternatives and similar repositories for MetaDistil
Users interested in MetaDistil are comparing it to the repositories listed below.
- Source code for our EMNLP'21 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆61 · Updated 3 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆90 · Updated 3 years ago
- ☆66 · Updated last year
- ☆32 · Updated 3 years ago
- Code for EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆41 · Updated 2 years ago
- The code for lifelong few-shot language learning ☆55 · Updated 3 years ago
- Code for EMNLP 2022 paper "Distilled Dual-Encoder Model for Vision-Language Understanding" ☆30 · Updated 2 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆102 · Updated 2 years ago
- Code for paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022☆61Updated 3 years ago
- ☆156Updated 3 years ago
- Implementation of the research paper Consistent Representation Learning for Continual Relation Extraction (Findings of ACL 2022)☆26Updated 3 years ago
- Official repository for CLRCMD (appeared in ACL 2022) ☆42 · Updated 2 years ago
- Source code for the paper "Contrastive Out-of-Distribution Detection for Pretrained Transformers" (EMNLP 2021) ☆40 · Updated 3 years ago
- Code for the AAAI 2022 publication "Well-classified Examples are Underestimated in Classification with Deep Neural Networks" ☆53 · Updated 2 years ago
- [ICLR 2022] Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners ☆132 · Updated 2 years ago
- Implementation of "Variational Information Bottleneck for Effective Low-resource Fine-tuning" (ICLR 2021) ☆40 · Updated 4 years ago
- [EMNLP 2022] Differentiable Data Augmentation for Contrastive Sentence Representation Learning. https://arxiv.org/abs/2210.16536 ☆40 · Updated 2 years ago
- Code for the ACL 2023 paper "Lifting the Curse of Capacity Gap in Distilling Language Models" ☆28 · Updated 2 years ago
- Source code for our AAAI'22 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆24 · Updated 3 years ago
- Code for the ACL 2021 paper "Unsupervised Out-of-Domain Detection via Pre-trained Transformers" ☆30 · Updated 3 years ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- Code for promptCSE (EMNLP 2022) ☆11 · Updated 2 years ago
- ☆64 · Updated 2 years ago
- ☆21 · Updated 4 years ago
- Must-read papers on improving efficiency for pre-trained language models ☆104 · Updated 2 years ago
- Code for the paper "Continual Learning for Text Classification with Information Disentanglement Based Regularization" ☆45 · Updated 2 years ago
- ☆73 · Updated 3 years ago
- ☆104 · Updated 3 years ago
- ICLR 2022 ☆17 · Updated 3 years ago
- ☆32 · Updated 3 years ago