yitu-opensource / ConvBert
☆251 · Updated 2 years ago
Alternatives and similar repositories for ConvBert:
Users that are interested in ConvBert are comparing it to the libraries listed below
- ⛵️ The official PyTorch implementation for "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020) ☆310 · Updated last year
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 3 years ago
- Code release for our arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆184 · Updated last year
- PyTorch implementation of "Patient Knowledge Distillation for BERT Model Compression" ☆200 · Updated 5 years ago
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆226 · Updated 3 years ago
- The source code of FastBERT (ACL 2020) ☆604 · Updated 3 years ago
- The official code repository for NumNet+ (https://leaderboard.allenai.org/drop/submission/blu418v76glsbnh1qvd0) ☆177 · Updated 6 months ago
- Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆250 · Updated 3 years ago
- TensorFlow implementation of "On the Sentence Embeddings from Pre-trained Language Models" (EMNLP 2020) ☆530 · Updated 3 years ago
- Code for "TENER: Adapting Transformer Encoder for Named Entity Recognition" ☆373 · Updated 4 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆171 · Updated 4 years ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆471 · Updated 2 years ago
- MPNet: Masked and Permuted Pre-training for Language Understanding (https://arxiv.org/pdf/2004.09297.pdf) ☆285 · Updated 3 years ago
- Fine-tune large BERT models with multi-GPU and FP16 support ☆192 · Updated 4 years ago
- Adversarial Training for Natural Language Understanding ☆252 · Updated last year
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆602 · Updated 6 months ago
- BERT as a language model, forked from https://github.com/google-research/bert ☆248 · Updated 10 months ago
- MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices ☆65 · Updated 4 years ago
- Code associated with the "Don't Stop Pretraining" ACL 2020 paper ☆530 · Updated 3 years ago
- ☆213 · Updated 4 years ago
- Papers I have read, mainly about NLP; contributions via issues are welcome ☆256 · Updated 3 years ago
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆115 · Updated 4 years ago
- Facilitating the design, comparison and sharing of deep text matching models ☆496 · Updated 8 months ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) ☆192 · Updated last year
- Leaderboards, Datasets and Papers for Multi-Turn Response Selection in Retrieval-Based Chatbots ☆204 · Updated 3 years ago
- An unofficial implementation of Poly-encoder ("Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate …") ☆247 · Updated last year
- [ACL 2020] DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering ☆120 · Updated last year
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆118 · Updated last year
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆126 · Updated 3 years ago
- DeLighT: Very Deep and Light-Weight Transformers ☆467 · Updated 4 years ago