pmichel31415 / are-16-heads-really-better-than-1
Code for the paper "Are Sixteen Heads Really Better than One?"
☆172 · Updated 5 years ago
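For context, the paper's core technique is to ablate individual attention heads by masking them out and measuring the resulting performance drop. Below is a minimal, illustrative PyTorch sketch of that idea; the function and argument names (`masked_multi_head_attention`, `head_mask`) are hypothetical and not the repository's actual API.

```python
# Minimal sketch of the head-masking idea: each attention head's output is
# multiplied by a 0/1 gate, so setting a gate to 0 ablates that head at
# inference time. Illustrative only; not the repository's actual code.
import torch
import torch.nn.functional as F

def masked_multi_head_attention(q, k, v, num_heads, head_mask):
    """q, k, v: (batch, seq_len, d_model); head_mask: (num_heads,) of 0./1."""
    batch, seq_len, d_model = q.shape
    d_head = d_model // num_heads

    # Split the model dimension into heads: (batch, heads, seq, d_head).
    def split(x):
        return x.view(batch, seq_len, num_heads, d_head).transpose(1, 2)

    q, k, v = split(q), split(k), split(v)
    scores = q @ k.transpose(-2, -1) / d_head ** 0.5
    attn = F.softmax(scores, dim=-1)
    out = attn @ v  # (batch, heads, seq, d_head)
    # Zero out the contribution of any masked (pruned) heads.
    out = out * head_mask.view(1, num_heads, 1, 1)
    return out.transpose(1, 2).reshape(batch, seq_len, d_model)

# Example: ablate head 3 of 16 and run a forward pass.
x = torch.randn(2, 10, 512)
mask = torch.ones(16)
mask[3] = 0.0
y = masked_multi_head_attention(x, x, x, num_heads=16, head_mask=mask)
```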
Alternatives and similar repositories for are-16-heads-really-better-than-1
Users interested in are-16-heads-really-better-than-1 are comparing it to the repositories listed below.
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference ☆158 · Updated 3 years ago
- A method to improve BERT inference time; an implementation of the paper "PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination" ☆62 · Updated 3 months ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆185 · Updated 2 years ago
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated 3 years ago
- [NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin ☆141 · Updated 3 years ago
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆118 · Updated 4 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆113 · Updated 6 years ago
- Code for the ACL 2019 paper "Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned" ☆313 · Updated 4 years ago
- ICLR 2019: Multilingual Neural Machine Translation with Knowledge Distillation ☆70 · Updated 4 years ago
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit" ☆65 · Updated 4 years ago
- Implementation of the ICLR 2020 paper "Revisiting Self-Training for Neural Sequence Generation" ☆46 · Updated 3 years ago
- ☆97 · Updated 5 years ago
- ☆219 · Updated 5 years ago
- Official repository for "The Curious Case of Neural Text Degeneration" ☆164 · Updated 2 years ago
- Code for the ACL 2020 paper "Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation" ☆39 · Updated 5 years ago
- Distilling BERT using natural language generation ☆38 · Updated 2 years ago
- Tutorials on training and testing retrieval-based models (DrQA & DPR) ☆51 · Updated 4 years ago
- ☆48 · Updated 5 years ago
- PyTorch implementation of BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning (https://arxiv.org/abs/1902.02671) ☆83 · Updated 6 years ago
- ☆21 · Updated 5 years ago
- Source code to reproduce the results in the ACL 2019 paper "Syntactically Supervised Transformers for Faster Neural Machine Translation" ☆81 · Updated 2 years ago
- Code for the EMNLP 2020 paper CoDIR ☆41 · Updated 2 years ago
- A reference-free metric for measuring summary quality, learned from human ratings ☆43 · Updated 2 years ago
- Understanding the Difficulty of Training Transformers ☆329 · Updated 3 years ago
- Source code for the Cutoff data augmentation approach proposed in the paper "A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation" ☆63 · Updated 4 years ago
- Code to support the paper "Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets" ☆66 · Updated 3 years ago
- Code for running the character-level Sandwich Transformers from the ACL 2020 paper "Improving Transformer Models by Reordering their Sublayers" ☆55 · Updated 4 years ago
- ☆62 · Updated 3 years ago
- RoBERTa training for SQuAD ☆50 · Updated 5 years ago
- PyTorch implementation of Patient Knowledge Distillation for BERT Model Compression ☆203 · Updated 5 years ago