dreamgonfly / BERT-pytorch
PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
☆110 · Updated 7 years ago
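The description above refers to BERT's masked-language-modeling pre-training objective. Below is a minimal, illustrative PyTorch sketch of that objective only; it does not use this repository's API, and all names in it (embed, encoder, mlm_head, the use of token id 0 as the mask token) are assumptions made for demonstration.

```python
# Illustrative sketch of a single masked-LM training step in plain PyTorch.
# NOT this repository's API; a toy encoder stands in for a full BERT model.
import torch
import torch.nn as nn

vocab_size, hidden, seq_len, batch = 1000, 64, 16, 2

embed = nn.Embedding(vocab_size, hidden)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
    num_layers=2,
)
mlm_head = nn.Linear(hidden, vocab_size)

tokens = torch.randint(0, vocab_size, (batch, seq_len))
labels = tokens.clone()

# Mask ~15% of positions; only masked positions contribute to the loss.
mask = torch.rand(tokens.shape) < 0.15
mask[0, 0] = True                 # ensure at least one masked position
tokens[mask] = 0                  # assumption: id 0 plays the role of [MASK]
labels[~mask] = -100              # ignored by cross-entropy below

logits = mlm_head(encoder(embed(tokens)))
loss = nn.functional.cross_entropy(
    logits.view(-1, vocab_size), labels.view(-1), ignore_index=-100
)
loss.backward()
print(loss.item())
```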
Alternatives and similar repositories for BERT-pytorch
Users interested in BERT-pytorch are comparing it to the repositories listed below.
- A PyTorch implementation of Transformer in "Attention is All You Need" ☆106 · Updated 5 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆174 · Updated 5 years ago
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit". ☆66 · Updated 4 years ago
- CharBERT: Character-aware Pre-trained Language Model (COLING 2020) ☆121 · Updated 4 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆134 · Updated last year
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆136 · Updated 2 years ago
- A PyTorch implementation of Google AI's BERT model provided with Google's pre-trained models, examples and utilities. ☆35 · Updated 6 years ago
- A PyTorch implementation of Google AI's BERT model provided with Google's pre-trained models, examples and utilities. ☆71 · Updated 3 years ago
- For the code release of our arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987). ☆185 · Updated 2 years ago
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro… ☆62 · Updated 3 months ago
- Code for the paper "Efficient Adaption of Pretrained Transformers for Abstractive Summarization"☆71Updated 6 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 4 years ago
- [ACL 2020] Structure-Level Knowledge Distillation For Multilingual Sequence Labeling ☆72 · Updated 3 years ago
- A BART version of an open-domain QA model in a closed-book setup ☆119 · Updated 5 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆137 · Updated 2 years ago
- Research code for the ACL 2020 paper "Distilling Knowledge Learned in BERT for Text Generation". ☆129 · Updated 4 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 3 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆127 · Updated 4 years ago
- Code and pre-trained models for the paper "Segatron: Segment-aware Transformer for Language Modeling and Understanding" ☆18 · Updated 3 years ago
- Code for the paper "A Theoretical Analysis of the Repetition Problem in Text Generation" (AAAI 2021). ☆57 · Updated 3 years ago
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference ☆161 · Updated 3 years ago
- A large scientific paraphrase dataset for longer paraphrase generation ☆39 · Updated 3 years ago
- ☆99 · Updated 5 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆52 · Updated 2 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆118 · Updated 2 years ago
- ☆81 · Updated 5 years ago
- Evidence-based QA system for community question answering. ☆112 · Updated 4 years ago
- ☆70 · Updated 5 years ago
- PyTorch implementation of BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning (https://arxiv.org/ab… ☆84 · Updated 6 years ago
- ICLR 2019, Multilingual Neural Machine Translation with Knowledge Distillation ☆70 · Updated 5 years ago