yzh119/BPT
Source code of paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning"
☆128 · Updated 4 years ago
Alternatives and similar repositories for BPT:
Users interested in BPT are comparing it to the libraries listed below.
- ☆97 · Updated 4 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets with SoTA performance. ☆86 · Updated last year
- Code for the RecAdam paper: "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting". ☆116 · Updated 4 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆251 · Updated 3 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking". ☆112 · Updated 5 years ago
- ☆83 · Updated 5 years ago
- ☆93 · Updated 5 years ago
- Code for "Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View". ☆148 · Updated 5 years ago
- ☆69 · Updated 4 years ago
- Code for "Graph-to-Sequence Learning using Gated Graph Neural Networks". ☆124 · Updated 4 years ago
- Re-implementation of "QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension". ☆120 · Updated 6 years ago
- PyTorch implementation of "BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning" (https://arxiv.org/ab… ☆82 · Updated 6 years ago
- Visualization for simple attention and Google's multi-head attention. ☆67 · Updated 7 years ago
- Graph-to-sequence model implemented in PyTorch, combining graph convolutional networks and opennmt-py. ☆151 · Updated 5 years ago
- "Non-Monotonic Sequential Text Generation" (ICML 2019). ☆72 · Updated 6 years ago
- Checking the interpretability of attention on text classification models. ☆48 · Updated 5 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ☆91 · Updated 3 years ago
- Implementation of the paper "Tree Transformer". ☆214 · Updated 4 years ago
- ☆50 · Updated last year
- Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning (authors' MXNet implementation for the TACL 2019 paper). ☆78 · Updated 4 years ago
- Source code for "Accelerating Neural Transformer via an Average Attention Network". ☆78 · Updated 5 years ago
- Code for our paper at EMNLP 2019. ☆36 · Updated 5 years ago
- Source code for "DialogWAE: Multimodal Response Generation with Conditional Wasserstein Autoencoder" (https://arxiv.org/abs/1805.12352). ☆125 · Updated 6 years ago
- A toolkit for training, tracking, and saving models and syncing results. ☆61 · Updated 5 years ago
- Code for the ACL 2019 paper "A Hierarchical Reinforced Sequence Operation Method for Unsupervised Text Style Transfer". ☆45 · Updated 9 months ago
- Source code to reproduce the results in the ACL 2019 paper "Syntactically Supervised Transformers for Faster Neural Machine Translation". ☆81 · Updated 2 years ago
- Reproduces the results of the ICLR 2018 paper "Compressing Word Embeddings via Deep Compositional Code Learning". ☆23 · Updated 6 years ago
- PyTorch implementation of Transformer-based neural machine translation. ☆78 · Updated 2 years ago
- ☆38 · Updated 5 years ago
- ☆93 · Updated 3 years ago