maknotavailable / pytorch-pretrained-BERT
A PyTorch implementation of Google AI's BERT model provided with Google's pre-trained models, examples and utilities.
☆71 · Updated 3 years ago
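For orientation, below is a minimal usage sketch in the style of the legacy `pytorch_pretrained_bert` package that this repository mirrors. The package name, model identifier (`bert-base-uncased`), and the two-value return of `BertModel` are assumptions based on that package's documented API, not on this listing itself.

```python
# Minimal sketch, assuming the legacy pytorch_pretrained_bert API
# (pip install pytorch-pretrained-bert)
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

# Load Google's pre-trained weights and the matching WordPiece tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

# Tokenize a sentence and map tokens to vocabulary ids
text = "[CLS] who was jim henson ? [SEP]"
tokens = tokenizer.tokenize(text)
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

# Forward pass: returns the per-layer hidden states and a pooled [CLS] output
with torch.no_grad():
    encoded_layers, pooled_output = model(input_ids)

print(len(encoded_layers), encoded_layers[-1].shape)  # 12 layers, [1, seq_len, 768]
```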
Alternatives and similar repositories for pytorch-pretrained-BERT
Users interested in pytorch-pretrained-BERT are comparing it to the libraries listed below.
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 4 years ago
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆227 · Updated 4 years ago
- PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆109 · Updated 7 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆136 · Updated 2 years ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆184 · Updated 2 years ago
- A PyTorch implementation of Google AI's BERT model provided with Google's pre-trained models, examples and utilities ☆35 · Updated 6 years ago
- ☆81 · Updated 4 years ago
- CharBERT: Character-aware Pre-trained Language Model (COLING 2020) ☆121 · Updated 4 years ago
- ☆219 · Updated 5 years ago
- ☆179 · Updated 3 years ago
- Research code for the ACL 2020 paper "Distilling Knowledge Learned in BERT for Text Generation" ☆129 · Updated 4 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆117 · Updated 2 years ago
- MPNet: Masked and Permuted Pre-training for Language Understanding (https://arxiv.org/pdf/2004.09297.pdf) ☆292 · Updated 4 years ago
- AAAI-20 paper: Cross-Lingual Natural Language Generation via Pre-Training ☆129 · Updated 4 years ago
- Unicoder model for understanding and generation ☆91 · Updated last year
- A PyTorch implementation of the Transformer in "Attention Is All You Need" ☆106 · Updated 4 years ago
- Implementation of the self-adjusting Dice loss from the "Dice Loss for Data-imbalanced NLP Tasks" paper ☆109 · Updated 4 years ago
- Code associated with the "Don't Stop Pretraining" ACL 2020 paper ☆537 · Updated 3 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆173 · Updated 5 years ago
- X-Transformer: Taming Pretrained Transformers for eXtreme Multi-label Text Classification ☆139 · Updated 4 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall (https://arxiv.org/abs/2104.05240) ☆167 · Updated 3 years ago
- Source code for the "TitleStylist" paper at ACL 2020 ☆77 · Updated last year
- Repository for the paper "Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning" ☆110 · Updated 4 years ago
- ☆97 · Updated 5 years ago
- IEEE/ACM TASLP 2020: SBERT-WK: A Sentence Embedding Method by Dissecting BERT-based Word Models ☆180 · Updated 4 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface (results of the paper replicated!) ☆330 · Updated last year
- Code for the EMNLP-IJCNLP 2019 MRQA Workshop paper "Domain-agnostic Question-Answering with Adversarial Training" ☆39 · Updated last year
- Implementation of the paper "Learning to Encode Text as Human-Readable Summaries using GAN" ☆66 · Updated 6 years ago
- ☆84 · Updated 5 years ago
- Code for the paper "True Few-Shot Learning in Language Models" (https://arxiv.org/abs/2105.11447) ☆144 · Updated 4 years ago