maknotavailable / pytorch-pretrained-BERT
A PyTorch implementation of Google AI's BERT model provided with Google's pre-trained models, examples and utilities.
☆68 · Updated 2 years ago
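As a quick orientation to the library this repository provides, below is a minimal usage sketch in the style of the upstream pytorch-pretrained-BERT README. The `pytorch_pretrained_bert` package name, the `BertTokenizer`/`BertModel` classes, and the `bert-base-uncased` weights are assumptions carried over from the original release, not guarantees about this particular fork.

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel  # assumes the classic pip package name

# Load the pre-trained tokenizer and model (weights are downloaded on first use).
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

# Tokenize a sentence pair and map tokens to vocabulary ids.
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokens = tokenizer.tokenize(text)
token_ids = tokenizer.convert_tokens_to_ids(tokens)

# Segment ids: 0 for the first sentence, 1 for the second.
sep_index = tokens.index('[SEP]')
segment_ids = [0] * (sep_index + 1) + [1] * (len(tokens) - sep_index - 1)

tokens_tensor = torch.tensor([token_ids])
segments_tensor = torch.tensor([segment_ids])

# Forward pass: returns the hidden states of all encoder layers plus a pooled [CLS] vector.
with torch.no_grad():
    encoded_layers, pooled_output = model(tokens_tensor, segments_tensor)

print(len(encoded_layers), encoded_layers[-1].shape)  # 12 layers, shape (1, seq_len, 768) for bert-base
```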
Related projects
Alternatives and complementary repositories for pytorch-pretrained-BERT
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 3 years ago
- Repository for the paper "Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning" ☆109 · Updated 4 years ago
- [EMNLP 2019] Mixture Content Selection for Diverse Sequence Generation (Question Generation / Abstractive Summarization) ☆114 · Updated 3 years ago
- A PyTorch implementation of Google AI's BERT model provided with Google's pre-trained models, examples and utilities. ☆35 · Updated 5 years ago
- A PyTorch implementation of Transformer in "Attention is All You Need" ☆103 · Updated 3 years ago
- Open source code for ACL 2020 paper "Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog" ☆104 · Updated 2 years ago
- Code for ACL 2019 paper: "Searching for Effective Neural Extractive Summarization: What Works and What's Next" ☆91 · Updated 3 years ago
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆225 · Updated 3 years ago
- Code for the RecAdam paper: Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting ☆115 · Updated 4 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆133 · Updated last year
- ☆24 · Updated 4 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆169 · Updated 4 years ago
- Dialogue Summarization ☆52 · Updated 5 years ago
- ☆200 · Updated last year
- Research code for ACL 2020 paper: "Distilling Knowledge Learned in BERT for Text Generation" ☆130 · Updated 3 years ago
- Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue. In ICLR, 2020 (spotlight) ☆129 · Updated 4 years ago
- RoBERTa training for SQuAD ☆51 · Updated 4 years ago
- AAAI-20 paper: Cross-Lingual Natural Language Generation via Pre-Training ☆128 · Updated 3 years ago
- PyTorch implementation of Patient Knowledge Distillation for BERT Model Compression ☆199 · Updated 5 years ago
- Asian-language BART models (En, Ja, Ko, Zh, ECJK) ☆65 · Updated 3 years ago
- Worth-reading papers and related resources on attention mechanisms, the Transformer, and pretrained language models (PLMs) such as BERT ☆132 · Updated 3 years ago
- ☆156 · Updated 3 months ago
- PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆97 · Updated 6 years ago
- ☆50 · Updated last year
- ☆175 · Updated 2 years ago
- Resources for the MRQA 2019 Shared Task ☆292 · Updated 3 years ago
- Unicoder model for understanding and generation ☆88 · Updated 11 months ago
- Reinforcement Learning Based Text Style Transfer without Parallel Training Corpus ☆27 · Updated 5 years ago
- ☆83 · Updated 3 years ago