maknotavailable / pytorch-pretrained-BERT
A PyTorch implementation of Google AI's BERT model provided with Google's pre-trained models, examples and utilities.
☆71 · Updated 3 years ago
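For quick orientation, here is a minimal usage sketch assuming the upstream `pytorch-pretrained-bert` package and its documented `BertTokenizer`/`BertModel` interface; this fork may differ in detail.

```python
# A minimal usage sketch, assuming the upstream pytorch-pretrained-bert
# package (pip install pytorch-pretrained-bert); this fork may differ.
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

# Load the pre-trained tokenizer and model weights.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

# Tokenize a sentence and map tokens to vocabulary ids.
text = "[CLS] the quick brown fox jumps over the lazy dog [SEP]"
tokens = tokenizer.tokenize(text)
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

# Forward pass: per-layer hidden states plus a pooled [CLS] representation.
with torch.no_grad():
    encoded_layers, pooled_output = model(input_ids)

print(len(encoded_layers), encoded_layers[-1].shape, pooled_output.shape)
```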
Alternatives and similar repositories for pytorch-pretrained-BERT
Users interested in pytorch-pretrained-BERT are comparing it to the libraries listed below.
- PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆110 · Updated 7 years ago
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆228 · Updated 4 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 4 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆136 · Updated 2 years ago
- A PyTorch implementation of Google AI's BERT model provided with Google's pre-trained models, examples and utilities. ☆35 · Updated 6 years ago
- A PyTorch implementation of the Transformer in "Attention Is All You Need" ☆106 · Updated 4 years ago
- AAAI-20 paper: Cross-Lingual Natural Language Generation via Pre-Training ☆129 · Updated 4 years ago
- Unicoder model for understanding and generation. ☆92 · Updated last year
- Research code for the ACL 2020 paper "Distilling Knowledge Learned in BERT for Text Generation". ☆128 · Updated 4 years ago
- ☆180 · Updated 3 years ago
- An implementation of CRF (Conditional Random Fields) in PyTorch 1.0 ☆137 · Updated 5 years ago
- Repository for the paper "Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning" ☆109 · Updated 5 years ago
- CharBERT: Character-aware Pre-trained Language Model (COLING 2020) ☆121 · Updated 4 years ago
- Source code for the paper "Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization" ☆91 · Updated 2 years ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆184 · Updated 2 years ago
- Worth-reading papers and related resources on the attention mechanism, the Transformer, and pretrained language models (PLMs) such as BERT ☆130 · Updated 4 years ago
- ☆96 · Updated 5 years ago
- ☆25 · Updated 5 years ago
- Code for the ACL 2019 paper "Searching for Effective Neural Extractive Summarization: What Works and What's Next" ☆91 · Updated 4 years ago
- ☆81 · Updated 4 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆173 · Updated 5 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall (https://arxiv.org/abs/2104.05240) ☆167 · Updated 3 years ago
- RoBERTa training for SQuAD ☆50 · Updated 5 years ago
- ☆254 · Updated 3 years ago
- A list of open-source projects from the Microsoft Research NLP Group ☆110 · Updated 5 years ago
- Contextual augmentation: text data augmentation using a bidirectional language model ☆192 · Updated 5 years ago
- Code associated with the "Don't Stop Pretraining" ACL 2020 paper ☆535 · Updated 4 years ago
- Source code for the "TitleStylist" paper at ACL 2020 ☆77 · Updated last year
- Few-shot Natural Language Generation for Task-Oriented Dialog ☆189 · Updated 3 years ago
- CIKM 2020: Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots ☆74 · Updated 5 years ago