jongwooko / Pytorch-MiniLM
Unofficial Pytorch implementation of MiniLM and MiniLMv2
☆21 · Updated 3 years ago
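For context, MiniLM (Wang et al., 2020) distills a small student by matching the teacher's last-layer self-attention distributions and value relations. The sketch below is a minimal illustration of that loss in PyTorch, not code from this repository; the function name `minilm_loss` and the tensor shapes are assumptions, and the paper's exact normalization (averaging over heads and positions) is simplified here to `batchmean`.

```python
import torch
import torch.nn.functional as F

def minilm_loss(t_attn, s_attn, t_val, s_val):
    """Deep self-attention distillation in the spirit of MiniLM.

    t_attn, s_attn: last-layer attention distributions, shape (B, H, L, L)
    t_val, s_val:   last-layer value vectors, shape (B, H, L, d)
    """
    # Attention-transfer loss: KL(teacher || student) over key positions.
    # clamp_min avoids log(0) for numerically zero student probabilities.
    l_at = F.kl_div(s_attn.clamp_min(1e-9).log(), t_attn, reduction="batchmean")

    def value_relation(v):
        # Scaled dot-product between value vectors, softmax over positions.
        d = v.size(-1)
        return F.softmax(v @ v.transpose(-1, -2) / d ** 0.5, dim=-1)

    # Value-relation loss: KL between teacher and student value relations.
    l_vr = F.kl_div(value_relation(s_val).clamp_min(1e-9).log(),
                    value_relation(t_val), reduction="batchmean")
    return l_at + l_vr
```

Because both terms are KL divergences, the loss is zero when the student's last-layer statistics exactly match the teacher's, regardless of how many layers or heads the student has.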
Alternatives and similar repositories for Pytorch-MiniLM:
Users interested in Pytorch-MiniLM are comparing it to the repositories listed below.
- ☆35 · Updated last year
- ☆65 · Updated 9 months ago
- The source code for the Cutoff data augmentation approach proposed in the paper "A Simple but Tough-to-Beat Data Augmentation Approach …" ☆62 · Updated 4 years ago
- Code for the ACL 2021 paper "Unsupervised Out-of-Domain Detection via Pre-trained Transformers" ☆30 · Updated 3 years ago
- Code for the ACL 2023 paper "Lifting the Curse of Capacity Gap in Distilling Language Models" ☆28 · Updated last year
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆193 · Updated last year
- Source code for the EMNLP 2021 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆57 · Updated 3 years ago
- Code for the EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆40 · Updated 2 years ago
- Code for the ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning" ☆85 · Updated 2 years ago
- Official PyTorch implementation of SSMix (Findings of ACL 2021) ☆62 · Updated 3 years ago
- Official repository for CLRCMD (ACL 2022) ☆40 · Updated 2 years ago
- [NAACL 2022] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning ☆93 · Updated 2 years ago
- [EMNLP 2022] Differentiable Data Augmentation for Contrastive Sentence Representation Learning. https://arxiv.org/abs/2210.16536 ☆38 · Updated 2 years ago
- Source code for "Target-Side Data Augmentation for Sequence Generation" ☆12 · Updated 3 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆118 · Updated last year
- Official code for "EPiDA: An Easy Plug-in Data Augmentation Framework for High Performance Text Classification" (NAACL 2022) ☆23 · Updated 2 years ago
- ☆28 · Updated 3 years ago
- The official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆100 · Updated 2 years ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022) ☆101 · Updated 2 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆45 · Updated 2 years ago
- ☆14 · Updated 5 years ago
- Code for lifelong few-shot language learning ☆55 · Updated 3 years ago
- Code for the AAAI 2022 paper "Unsupervised Sentence Representation via Contrastive Learning with Mixing Negatives" ☆23 · Updated 2 years ago
- Exploring mixup strategies for text classification ☆30 · Updated 4 years ago
- ☆78 · Updated 2 years ago
- Source code for the AAAI 2022 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆23 · Updated 3 years ago
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆45 · Updated 2 years ago
- ☆116 · Updated 2 years ago
- Calculating FLOPs of pre-trained models in NLP ☆18 · Updated 3 years ago
- ☆34 · Updated 2 years ago