FreedomIntelligence / complex-order
☆84 · Updated 6 years ago
Alternatives and similar repositories for complex-order
Users interested in complex-order are comparing it to the repositories listed below.
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets with SoTA performance ☆86 · Updated 2 years ago
- This repo provides the code for the ACL 2020 paper "Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEnco…" ☆57 · Updated 5 years ago
- The code of "Encoding Word Order in Complex-valued Embedding" ☆42 · Updated 6 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆254 · Updated 4 years ago
- Code for the ACL 2020 paper "Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation" ☆39 · Updated 5 years ago
- ☆94 · Updated 5 years ago
- ☆50 · Updated 2 years ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆184 · Updated 2 years ago
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆118 · Updated 5 years ago
- A PyTorch implementation of self-attention with relative position representations ☆50 · Updated 4 years ago
- An Unsupervised Sentence Embedding Method by Mutual Information Maximization (EMNLP 2020) ☆61 · Updated 4 years ago
- PyTorch implementation of "BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning" (https://arxiv.org/ab…) ☆84 · Updated 6 years ago
- PyTorch implementation of the methods proposed in "Adversarial Training Methods for Semi-Supervised Text Classification" on the IMDB datase… ☆44 · Updated 6 years ago
- Research code for the ACL 2020 paper "Distilling Knowledge Learned in BERT for Text Generation" ☆129 · Updated 4 years ago
- ICLR 2019, "Multilingual Neural Machine Translation with Knowledge Distillation" ☆70 · Updated 5 years ago
- Worth-reading papers and related resources on the attention mechanism, Transformer, and pretrained language models (PLMs) such as BERT ☆130 · Updated 4 years ago
- Visualization for simple attention and Google's multi-head attention ☆68 · Updated 7 years ago
- A Transformer-based single-model, multi-scale VAE ☆58 · Updated 4 years ago
- Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction ☆20 · Updated 3 years ago
- Implementation of the paper "Tree Transformer" ☆214 · Updated 5 years ago
- The official Keras implementation of the ACL 2020 paper "Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-En…" ☆48 · Updated 3 years ago
- Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning (authors' MXNet implementation for the TACL 2019 paper) ☆78 · Updated 4 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆175 · Updated 5 years ago
- Source code for the Cutoff data augmentation approach proposed in the paper "A Simple but Tough-to-Beat Data Augmentation Approach …" ☆63 · Updated 5 years ago
- Code for the paper "Improving Sequence-to-Sequence Learning via Optimal Transport" ☆68 · Updated 6 years ago
- Source code for "DialogWAE: Multimodal Response Generation with Conditional Wasserstein Autoencoder" (https://arxiv.org/abs/1805.12352) ☆126 · Updated 7 years ago
- Code for "Variational Template Machine for Data-to-text Generation" ☆31 · Updated 5 years ago
- Codebase for DualEnc (ACL 2020) ☆22 · Updated 2 years ago
- Code implementation of the paper "Towards a Deep and Unified Understanding of Deep Neural Models in NLP" ☆73 · Updated 6 years ago
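As a rough illustration of the complex-order idea this page centers on ("Encoding Word Order in Complex-valued Embedding", listed above), each embedding dimension can be viewed as a complex wave whose phase advances with token position, while its amplitude stays fixed; amplitude, frequency, and initial phase are learned per word in the paper. A minimal NumPy sketch, with all parameter names and the random toy values being illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def complex_order_embedding(amplitude, frequency, phase, position):
    """Complex-valued embedding of one token at a given position.

    amplitude, frequency, phase: per-dimension arrays of shape [d]
    (learned per word in the paper; random toy values here).
    Returns a complex vector of shape [d]:
        amplitude * exp(i * (frequency * position + phase))
    """
    return amplitude * np.exp(1j * (frequency * position + phase))

# Toy example: embed the same word at positions 0 and 3.
d = 4
rng = np.random.default_rng(0)
amp = rng.uniform(0.5, 1.5, d)        # per-dimension amplitude
freq = rng.uniform(0.0, np.pi, d)     # per-dimension angular frequency
theta = rng.uniform(0.0, 2 * np.pi, d)  # per-dimension initial phase

e0 = complex_order_embedding(amp, freq, theta, position=0)
e3 = complex_order_embedding(amp, freq, theta, position=3)

# Moving the token only rotates the phase; the magnitude is
# position-independent, so |e0| == |e3| == amplitude.
assert np.allclose(np.abs(e0), np.abs(e3))
```

This position-only phase rotation is what distinguishes the approach from additive sinusoidal positional encodings: word identity and word order occupy the magnitude and phase of the same complex number instead of being summed.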