pmichel31415 / are-16-heads-really-better-than-1
Code for the paper "Are Sixteen Heads Really Better than One?"
☆173 · Updated 5 years ago
Alternatives and similar repositories for are-16-heads-really-better-than-1
Users interested in are-16-heads-really-better-than-1 are comparing it to the libraries listed below.
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated 3 years ago
- For the code release of our arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987). ☆184 · Updated 2 years ago
- [NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Ya… ☆141 · Updated 3 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆113 · Updated 6 years ago
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro… ☆62 · Updated 2 months ago
- Code for the RecAdam paper: Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting. ☆118 · Updated 5 years ago
- PyTorch implementation of BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning (https://arxiv.org/ab… ☆84 · Updated 6 years ago
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference ☆160 · Updated 3 years ago
- A masked language modeling objective to train a model to predict any subset of the target words, conditioned on both the input text and a… ☆245 · Updated 4 years ago
- Source code of paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- ICLR 2019, Multilingual Neural Machine Translation with Knowledge Distillation ☆70 · Updated 5 years ago
- ☆119 · Updated 6 years ago
- ☆96 · Updated 5 years ago
- ☆219 · Updated 5 years ago
- Code for ACL 2020 "Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation" ☆39 · Updated 5 years ago
- Implementation of ICLR 2020 paper "Revisiting Self-Training for Neural Sequence Generation" ☆46 · Updated 3 years ago
- Research code for ACL 2020 paper: "Distilling Knowledge Learned in BERT for Text Generation". ☆128 · Updated 4 years ago
- This is a repository with the code for the ACL 2019 paper "Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, t… ☆315 · Updated 4 years ago
- Tutorials on training and testing retrieval-based models (DrQA & DPR) ☆51 · Updated 4 years ago
- The source code for the Cutoff data augmentation approach proposed in this paper: "A Simple but Tough-to-Beat Data Augmentation Approach … ☆63 · Updated 5 years ago
- ☆48 · Updated 5 years ago
- PyTorch implementation of "A Probabilistic Formulation of Unsupervised Text Style Transfer" by He et al., ICLR 2020 ☆163 · Updated 3 years ago
- [ACL '20] Highway Transformer: A Gated Transformer. ☆32 · Updated 3 years ago
- Code for EMNLP 2020 paper CoDIR ☆41 · Updated 3 years ago
- Code to support the paper "Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets" ☆66 · Updated 4 years ago
- Neural Text Generation with Unlikelihood Training ☆309 · Updated 4 years ago
- Distilling BERT using natural language generation. ☆38 · Updated 2 years ago
- A reference-free metric for measuring summary quality, learned from human ratings. ☆43 · Updated 2 years ago
- Sequence-Level Mixed Sample Data Augmentation ☆22 · Updated 4 years ago
- ☆62 · Updated 3 years ago