maknotavailable / pytorch-pretrained-BERT
A PyTorch implementation of Google AI's BERT model provided with Google's pre-trained models, examples and utilities.
☆71 · Updated 3 years ago
Alternatives and similar repositories for pytorch-pretrained-BERT
Users interested in pytorch-pretrained-BERT are comparing it to the libraries listed below.
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆227 · Updated 4 years ago
- A PyTorch implementation of the Transformer from "Attention Is All You Need" ☆106 · Updated 4 years ago
- PyTorch implementation of BERT from "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆108 · Updated 6 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 4 years ago
- Code associated with the paper "Data Augmentation using Pre-trained Transformer Models" ☆134 · Updated 2 years ago
- Research code for the ACL 2020 paper "Distilling Knowledge Learned in BERT for Text Generation" ☆131 · Updated 4 years ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆185 · Updated 2 years ago
- Unicoder model for understanding and generation ☆91 · Updated last year
- Repository for the paper "Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning" ☆110 · Updated 4 years ago
- A PyTorch implementation of Google AI's BERT model provided with Google's pre-trained models, examples and utilities ☆35 · Updated 6 years ago
- ☆81 · Updated 4 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆117 · Updated 2 years ago
- Source code for our "TitleStylist" paper at ACL 2020 ☆77 · Updated last year
- ☆219 · Updated 5 years ago
- Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆252 · Updated 3 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall (https://arxiv.org/abs/2104.05240) ☆168 · Updated 2 years ago
- PyTorch implementation of "Adaptive Co-attention Network for Named Entity Recognition in Tweets" (AAAI 2018) ☆58 · Updated last year
- AAAI-20 paper: Cross-Lingual Natural Language Generation via Pre-Training ☆129 · Updated 4 years ago
- CharBERT: Character-aware Pre-trained Language Model (COLING 2020) ☆121 · Updated 4 years ago
- X-Transformer: Taming Pretrained Transformers for eXtreme Multi-label Text Classification ☆138 · Updated 4 years ago
- Worth-reading papers and related resources on attention mechanisms, Transformers, and pretrained language models (PLMs) such as BERT ☆131 · Updated 4 years ago
- ☆178 · Updated 3 years ago
- Code associated with the "Don't Stop Pretraining" ACL 2020 paper ☆533 · Updated 3 years ago
- Code for the ACL 2019 paper "Searching for Effective Neural Extractive Summarization: What Works and What's Next" ☆90 · Updated 4 years ago
- [EMNLP 2019] Mixture Content Selection for Diverse Sequence Generation (question generation / abstractive summarization) ☆113 · Updated 4 years ago
- Reference PyTorch code for named-entity tagging ☆86 · Updated 10 months ago
- Plot the vector graph of attention-based text visualisation ☆371 · Updated 6 years ago
- ☆97 · Updated 5 years ago
- Source code for the paper "Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization" ☆91 · Updated last year
- SpanNER: Named Entity Re-/Recognition as Span Prediction ☆131 · Updated 3 years ago