lena-voita / the-story-of-heads
This repository contains the code for the ACL 2019 paper "Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned" and the ACL 2021 paper "Analyzing Source and Target Contributions to NMT Predictions".
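For context on what "pruning heads" means mechanically: the ACL 2019 paper multiplies each attention head's output by a scalar gate before the output projection, so a gate driven to zero removes that head. Below is a minimal PyTorch sketch of that gating idea, written for this page rather than taken from the repository; all names are illustrative, and the paper's stochastic hard-concrete gates and L0 sparsity penalty are omitted.

```python
# Minimal sketch (not the repository's code) of per-head gating:
# each head's output is scaled by a scalar gate, so gate = 0 prunes the head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One scalar gate per head; the paper instead trains stochastic
        # hard-concrete gates under an L0 penalty, omitted here.
        self.gates = nn.Parameter(torch.ones(n_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, s, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(t):  # (b, s, d) -> (b, heads, s, d_head)
            return t.view(b, s, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        heads = attn @ v                              # (b, heads, s, d_head)
        heads = heads * self.gates.view(1, -1, 1, 1)  # gate before concat
        return self.out(heads.transpose(1, 2).reshape(b, s, d))

# Usage: zero a gate to "prune" that head and observe the effect.
mha = GatedSelfAttention(d_model=512, n_heads=8)
x = torch.randn(2, 10, 512)
with torch.no_grad():
    mha.gates[3] = 0.0  # head 3 now contributes nothing
y = mha(x)              # (2, 10, 512)
```

Plain scalar gates like these also support ablation-style importance probing: zero one gate at a time and measure the drop in a task metric.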
☆313 · Updated 3 years ago
Alternatives and similar repositories for the-story-of-heads
Users interested in the-story-of-heads are comparing it to the libraries listed below.
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆171 · Updated 5 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆195 · Updated 2 years ago
- For the code release of our arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987). ☆184 · Updated last year
- Understanding the Difficulty of Training Transformers ☆329 · Updated 2 years ago
- A masked language modeling objective to train a model to predict any subset of the target words, conditioned on both the input text and a… ☆242 · Updated 3 years ago
- Repository containing code for the paper "How to Train BERT with an Academic Budget" ☆313 · Updated last year
- ☆489 · Updated last year
- ☆319 · Updated 3 years ago
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer and Hannaneh Hajishirzi ☆264 · Updated 2 years ago
- Tracking the progress in non-autoregressive generation (translation, transcription, etc.) ☆307 · Updated 2 years ago
- A PyTorch implementation of the Transformer in "Attention Is All You Need" ☆105 · Updated 4 years ago
- ☆291 · Updated 2 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆151 · Updated last year
- Neural Text Generation with Unlikelihood Training ☆309 · Updated 3 years ago
- PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆105 · Updated 6 years ago
- [NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Ya… ☆140 · Updated 3 years ago
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference ☆156 · Updated 3 years ago
- ☆315 · Updated 2 years ago
- ☆363 · Updated 2 years ago
- ☆463 · Updated 4 years ago
- ☆84 · Updated last year
- Transformer with Untied Positional Encoding (TUPE). Code of the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆251 · Updated 3 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆329 · Updated last year
- PyTorch code for the EMNLP 2020 paper "Vokenization: Improving Language Understanding with Visual Supervision" ☆188 · Updated 4 years ago
- Multi30k Dataset ☆178 · Updated 3 years ago
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆226 · Updated 4 years ago
- The entmax mapping and its loss, a family of sparse softmax alternatives (a short usage sketch follows this list). ☆436 · Updated 10 months ago
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit". ☆65 · Updated 3 years ago
- PyTorch implementation of "Patient Knowledge Distillation for BERT Model Compression" ☆202 · Updated 5 years ago
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" ☆167 · Updated 3 years ago
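Here is the usage sketch promised for the entmax entry above. It assumes the `entmax` package from PyPI (`pip install entmax`); the function names below match its public API to the best of my knowledge, but check the repository's README for the current interface.

```python
# Hedged sketch: drop-in sparse replacements for softmax, assuming the
# PyPI `entmax` package exposes `sparsemax` and `entmax15` as documented.
import torch
from entmax import entmax15, sparsemax

logits = torch.tensor([[1.5, 0.1, -2.0, 0.9]])

print(torch.softmax(logits, dim=-1))  # dense: every probability is nonzero
print(entmax15(logits, dim=-1))       # sparse: low scores map to exactly 0
print(sparsemax(logits, dim=-1))      # typically the sparsest of the three
```

Because the outputs sum to 1 just as softmax does, these mappings can replace attention weights or output distributions without other architectural changes.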