yitu-opensource / ConvBert
☆251 · Updated 2 years ago
Alternatives and similar repositories for ConvBert:
Users interested in ConvBert are comparing it to the libraries listed below.
- ⛵️The official PyTorch implementation for "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020).☆311 · Updated last year
- PyTorch implementation of Patient Knowledge Distillation for BERT Model Compression☆202 · Updated 5 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators☆91 · Updated 3 years ago
- MPNet: Masked and Permuted Pre-training for Language Understanding https://arxiv.org/pdf/2004.09297.pdf☆294 · Updated 3 years ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987)☆184 · Updated last year
- Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis…☆251 · Updated 3 years ago
- An efficient implementation of popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p…☆433 · Updated 2 years ago
- TensorFlow implementation of On the Sentence Embeddings from Pre-trained Language Models (EMNLP 2020)☆533 · Updated 3 years ago
- The official code repository for NumNet+ (https://leaderboard.allenai.org/drop/submission/blu418v76glsbnh1qvd0)☆177 · Updated 9 months ago
- Code for "TENER: Adapting Transformer Encoder for Named Entity Recognition"☆375 · Updated 4 years ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT"☆472 · Updated 2 years ago
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations)☆226 · Updated 4 years ago
- Source code for the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning"☆128 · Updated 4 years ago
- Adversarial Training for Natural Language Understanding☆252 · Updated last year
- Code associated with the "Don't Stop Pretraining" ACL 2020 paper☆529 · Updated 3 years ago
- The source code of FastBERT (ACL 2020)☆604 · Updated 3 years ago
- ☆97 · Updated 4 years ago
- ☆218 · Updated 4 years ago
- Fine-tune large BERT models with multi-GPU and FP16 support.☆192 · Updated 5 years ago
- Code for the RecAdam paper: Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting.☆116 · Updated 4 years ago
- Multi-GPU pre-training on one machine for BERT without Horovod (data parallelism)☆172 · Updated last month
- ☆166 · Updated 3 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?"☆171 · Updated 5 years ago
- Semantics-aware BERT for Language Understanding (AAAI 2020)☆287 · Updated 2 years ago
- LAMB Optimizer for Large Batch Training (TensorFlow version)☆120 · Updated 5 years ago
- Code for the NAACL 2022 long paper "DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings"☆293 · Updated 2 years ago
- ☆93 · Updated 3 years ago
- BERT as a language model, forked from https://github.com/google-research/bert☆247 · Updated last year
- [ICLR 2020] Lite Transformer with Long-Short Range Attention☆607 · Updated 9 months ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining☆118 · Updated last year