JetRunner / BERT-of-Theseus
⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020).
☆313 · Updated 2 years ago
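The header names the compression technique, progressive module replacing: each group of original ("predecessor") layers is randomly swapped for a smaller ("successor") module during fine-tuning, with the replacement probability growing over training. A minimal stdlib-only sketch of the idea follows; the function names, the 2:1 layer grouping, and the scheduler constants are illustrative assumptions, not the repo's actual API.

```python
import random

def theseus_forward(x, predecessor_layers, successor_layers, replace_rate):
    """One forward pass with progressive module replacing (sketch).

    Each successor module independently replaces its group of two
    predecessor layers with probability `replace_rate`; otherwise the
    original layers run. Layers here are plain callables for clarity.
    """
    for i, successor in enumerate(successor_layers):
        if random.random() < replace_rate:
            x = successor(x)  # compressed module takes over this group
        else:
            for predecessor in predecessor_layers[2 * i : 2 * i + 2]:
                x = predecessor(x)  # original layers still in place
    return x

def linear_replace_rate(step, k=1e-4, b=0.5):
    """Replacement-rate curriculum: p(t) = min(1, k*t + b).

    The constants k and b are illustrative; the paper leaves them as
    hyperparameters. Once p reaches 1, only successor modules run,
    yielding the compressed model.
    """
    return min(1.0, k * step + b)
```

At `replace_rate=1.0` the forward pass uses only the compressed successors, so the fully trained successor stack is the final compact model; no separate distillation loss is needed.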
Alternatives and similar repositories for BERT-of-Theseus
Users interested in BERT-of-Theseus are comparing it to the libraries listed below.
- PyTorch implementation of "Patient Knowledge Distillation for BERT Model Compression" ☆201 · Updated 5 years ago
- ☆251 · Updated 2 years ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆184 · Updated 2 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 3 years ago
- The source code of FastBERT (ACL 2020) ☆605 · Updated 3 years ago
- Code for "TENER: Adapting Transformer Encoder for Named Entity Recognition" ☆376 · Updated 4 years ago
- Adversarial Training for Natural Language Understanding ☆252 · Updated last year
- TensorFlow implementation of "On the Sentence Embeddings from Pre-trained Language Models" (EMNLP 2020) ☆532 · Updated 4 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆172 · Updated 5 years ago
- Code associated with the "Don't Stop Pretraining" ACL 2020 paper ☆532 · Updated 3 years ago
- Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆251 · Updated 3 years ago
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆226 · Updated 4 years ago
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆117 · Updated 4 years ago
- Implementation of the ESIM model for natural language inference with PyTorch ☆368 · Updated 3 years ago
- An unofficial implementation of Poly-encoder ("Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate …") ☆249 · Updated 2 years ago
- Semantics-aware BERT for Language Understanding (AAAI 2020) ☆287 · Updated 2 years ago
- [ACL 2020] DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering ☆120 · Updated 2 years ago
- The official code repository for NumNet+ (https://leaderboard.allenai.org/drop/submission/blu418v76glsbnh1qvd0) ☆177 · Updated 11 months ago
- MPNet: Masked and Permuted Pre-training for Language Understanding (https://arxiv.org/pdf/2004.09297.pdf) ☆294 · Updated 3 years ago
- A reproduction of the ACL 2020 FastBERT paper (https://arxiv.org/pdf/2004.02178.pdf) ☆194 · Updated 3 years ago
- Worth-reading papers and related resources on attention mechanisms, Transformers, and pretrained language models (PLMs) such as BERT ☆133 · Updated 4 years ago
- Leaderboards, Datasets and Papers for Multi-Turn Response Selection in Retrieval-Based Chatbots ☆203 · Updated 4 years ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆473 · Updated 3 years ago
- Code for the ACL 2020 paper "Few-shot Slot Tagging with Collapsed Dependency Transfer and Label-enhanced Task-adaptive Projection Network" ☆153 · Updated 3 years ago
- A simple yet complete implementation of the popular BERT model ☆127 · Updated 5 years ago
- PyTorch version of BERT-whitening ☆307 · Updated 3 years ago
- ☆166 · Updated 2 years ago
- Platform for few-shot natural language processing: Text Classification, Sequence Labeling ☆219 · Updated 3 years ago
- BERT distillation (distillation experiments based on BERT) ☆313 · Updated 4 years ago
- Code for the ACL 2020 paper "Dice Loss for Data-imbalanced NLP Tasks" ☆274 · Updated 2 years ago