JetRunner / BERT-of-Theseus
⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020).
☆315 · Updated 2 years ago
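The paper's core idea is progressive module replacing: during fine-tuning, each group of layers in the original (predecessor) BERT is stochastically swapped for a smaller successor layer, so the compact model learns to mimic the original in place. The sketch below illustrates that training scheme under stated assumptions; the class and parameter names are illustrative, not the repository's actual API.

```python
import torch
import torch.nn as nn

class TheseusEncoder(nn.Module):
    """Minimal sketch of progressive module replacing (illustrative, not the repo's API).

    During training, each predecessor group of layers is replaced by its
    smaller successor layer with probability `replace_rate`; at inference
    time only the successor layers are used.
    """

    def __init__(self, prd_groups, scc_layers, replace_rate=0.5):
        super().__init__()
        assert len(prd_groups) == len(scc_layers)
        self.prd_groups = nn.ModuleList(prd_groups)  # predecessor groups (typically frozen)
        self.scc_layers = nn.ModuleList(scc_layers)  # successor layers (trainable)
        self.replace_rate = replace_rate

    def forward(self, x):
        for prd, scc in zip(self.prd_groups, self.scc_layers):
            if self.training:
                # Bernoulli draw per module: replace predecessor with successor
                if torch.rand(1).item() < self.replace_rate:
                    x = scc(x)
                else:
                    x = prd(x)
            else:
                # after compression, inference uses only the successor
                x = scc(x)
        return x
```

In the paper the replacement rate is also annealed toward 1 over training so that, by the end, only successor modules remain; the constant rate here keeps the sketch short.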
Alternatives and similar repositories for BERT-of-Theseus
Users interested in BERT-of-Theseus are comparing it to the repositories listed below.
- ☆254 · Updated 3 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 4 years ago
- The source code of FastBERT (ACL 2020) ☆609 · Updated 3 years ago
- TensorFlow implementation of "On the Sentence Embeddings from Pre-trained Language Models" (EMNLP 2020) ☆534 · Updated 4 years ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆184 · Updated 2 years ago
- Adversarial Training for Natural Language Understanding ☆253 · Updated 2 years ago
- The official code repository for NumNet+ (https://leaderboard.allenai.org/drop/submission/blu418v76glsbnh1qvd0) ☆176 · Updated last year
- Leaderboards, Datasets and Papers for Multi-Turn Response Selection in Retrieval-Based Chatbots ☆203 · Updated 4 years ago
- Code for the ACL 2020 paper "Dice Loss for Data-imbalanced NLP Tasks" ☆274 · Updated 2 years ago
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆118 · Updated 4 years ago
- ☆168 · Updated 3 years ago
- MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification ☆357 · Updated 5 years ago
- [ACL 2020] DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering ☆121 · Updated 2 years ago
- "Few-shot Text Classification with Distributional Signatures" (ICLR 2020) ☆260 · Updated 4 years ago
- Code for "TENER: Adapting Transformer Encoder for Named Entity Recognition" ☆378 · Updated 5 years ago
- Repository for the paper "Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning" ☆110 · Updated 4 years ago
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆227 · Updated 4 years ago
- Worth-reading papers and related resources on attention mechanisms, Transformers, and pretrained language models (PLMs) such as BERT ☆130 · Updated 4 years ago
- Fine-tune large BERT models with multi-GPU and FP16 support ☆192 · Updated 5 years ago
- ☆79 · Updated 3 years ago
- Code associated with the ACL 2020 paper "Don't Stop Pretraining" ☆537 · Updated 3 years ago
- A list of recent papers on meta-learning and few-shot learning methods applied to NLP ☆231 · Updated 4 years ago
- [EMNLP 2020] Text Classification Using Label Names Only: A Language Model Self-Training Approach ☆300 · Updated 3 years ago
- A reproduction of the ACL 2020 FastBERT paper (https://arxiv.org/pdf/2004.02178.pdf) ☆194 · Updated 3 years ago
- An unofficial implementation of Poly-encoder ("Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate …") ☆249 · Updated 2 years ago
- Semantics-aware BERT for Language Understanding (AAAI 2020) ☆289 · Updated 2 years ago
- BERT as language model, forked from https://github.com/google-research/bert ☆249 · Updated last year
- Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆252 · Updated 3 years ago
- Platform for few-shot natural language processing: text classification, sequence labeling ☆221 · Updated 3 years ago
- Collections of Chinese reading comprehension datasets ☆220 · Updated 5 years ago