dreamgonfly / BERT-pytorch
PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
☆100 · Updated 6 years ago
Alternatives and similar repositories for BERT-pytorch:
Users interested in BERT-pytorch are comparing it to the libraries listed below.
- A PyTorch implementation of the Transformer from "Attention Is All You Need" ☆104 · Updated 4 years ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆184 · Updated last year
- PyTorch – SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models ☆61 · Updated 2 years ago
- Implementation of Mixout with PyTorch ☆74 · Updated 2 years ago
- A PyTorch implementation of SmoothGrad [https://arxiv.org/pdf/1706.03825.pdf] and Integrated Gradients [https://arxiv.org/pdf/1703.01…] ☆46 · Updated 4 years ago
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit" ☆65 · Updated 3 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 3 years ago
- CharBERT: Character-aware Pre-trained Language Model (COLING 2020) ☆120 · Updated 4 years ago
- [ACL 2020] Structure-Level Knowledge Distillation for Multilingual Sequence Labeling ☆71 · Updated 2 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆118 · Updated last year
- A reference-free metric for measuring summary quality, learned from human ratings ☆42 · Updated 2 years ago
- Code associated with the paper "Data Augmentation using Pre-trained Transformer Models" ☆52 · Updated last year
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall (https://arxiv.org/abs/2104.05240) ☆167 · Updated 2 years ago
- An Unsupervised Sentence Embedding Method by Mutual Information Maximization (EMNLP 2020) ☆61 · Updated 4 years ago
- Source code for the paper "WhiteningBERT: An Easy Unsupervised Sentence Embedding Approach" ☆56 · Updated 3 years ago
- ☆82 · Updated 4 years ago
- ☆66 · Updated 3 years ago
- A PyTorch implementation of Google AI's BERT model, provided with Google's pre-trained models, examples, and utilities ☆71 · Updated 2 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆131 · Updated 9 months ago
- A few-shot NLP benchmark for unified, rigorous evaluation ☆91 · Updated 2 years ago
- A library for conducting ranking experiments with transformers ☆161 · Updated last year
- Code for the NAACL 2021 full paper "Efficient Attentions for Long Document Summarization" ☆66 · Updated 3 years ago
- Code associated with the paper "Data Augmentation using Pre-trained Transformer Models" ☆135 · Updated last year
- EMNLP BlackboxNLP 2020: Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples ☆23 · Updated 4 years ago
- A method to improve inference time for BERT; an implementation of the paper "PoWER-BERT: Accelerating BERT Inference via Pro…" ☆61 · Updated last year
- ☆53 · Updated 7 years ago
- [NAACL 2021] Designing a Minimal Retrieve-and-Read System for Open-Domain Question Answering ☆36 · Updated 3 years ago
- ☆43 · Updated 5 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆171 · Updated 4 years ago
- A niche collection of research papers that have proven to push the field of Natural Language Processing, Deep Lea… ☆25 · Updated 3 months ago