FreedomIntelligence / complex-order
☆83 · Updated 5 years ago
Alternatives and similar repositories for complex-order
Users interested in complex-order are comparing it to the libraries listed below.
- The code of "Encoding Word Order in Complex-valued Embedding" ☆42 · Updated 6 years ago
- ☆93 · Updated 5 years ago
- Source code of paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- This repo provides the code for the ACL 2020 paper "Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEnco… ☆55 · Updated 4 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆251 · Updated 3 years ago
- Visualization for simple attention and Google's multi-head attention. ☆67 · Updated 7 years ago
- PyTorch implementation of the methods proposed in "Adversarial Training Methods for Semi-Supervised Text Classification" on IMDB datase… ☆42 · Updated 6 years ago
- A simple module consistently outperforms self-attention and Transformer model on main NMT datasets with SoTA performance. ☆85 · Updated 2 years ago
- Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction ☆20 · Updated 3 years ago
- An Unsupervised Sentence Embedding Method by Mutual Information Maximization (EMNLP 2020) ☆61 · Updated 4 years ago
- Codes for our paper at EMNLP 2019 ☆36 · Updated 5 years ago
- ☆50 · Updated 2 years ago
- For the code release of our arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987). ☆184 · Updated 2 years ago
- Code for the RecAdam paper: "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting". ☆118 · Updated 4 years ago
- Worth-reading papers and related resources on attention mechanism, Transformer and pretrained language model (PLM) such as BERT. ☆131 · Updated 4 years ago
- Research code for ACL 2020 paper: "Distilling Knowledge Learned in BERT for Text Generation". ☆131 · Updated 4 years ago
- Code for ACL 2020 "Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation" ☆39 · Updated 5 years ago
- The source code for the Cutoff data augmentation approach proposed in this paper: "A Simple but Tough-to-Beat Data Augmentation Approach … ☆63 · Updated 4 years ago
- Code implementation of paper "Towards A Deep and Unified Understanding of Deep Neural Models in NLP" ☆73 · Updated 6 years ago
- PyTorch implementation of "BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning" (https://arxiv.org/ab… ☆83 · Updated 6 years ago
- Code for AAAI 2020 paper "Graph Transformer for Graph-to-Sequence Learning" ☆190 · Updated last year
- A PyTorch implementation of self-attention with relative position representations ☆50 · Updated 4 years ago
- A single-model, multi-scale VAE based on the Transformer ☆57 · Updated 4 years ago
- Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning (authors' MXNet implementation for the TACL 2019 paper) ☆78 · Updated 4 years ago
- PyTorch implementation of the paper "Hyperbolic Interaction Model For Hierarchical Multi-Label Classification" ☆47 · Updated 5 years ago
- The official Keras implementation of ACL 2020 paper "Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-En… ☆48 · Updated 2 years ago
- Source code for "DialogWAE: Multimodal Response Generation with Conditional Wasserstein Autoencoder" (https://arxiv.org/abs/1805.12352) ☆125 · Updated 6 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆172 · Updated 5 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 3 years ago
- ☆28 · Updated 3 years ago
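The central idea behind complex-order, from "Encoding Word Order in Complex-valued Embedding" (the first entry above), is that each token maps to a complex vector whose amplitude encodes the word identity and whose phase grows linearly with position. A minimal sketch in NumPy follows; the function and parameter names are illustrative assumptions, not taken from the repository's code.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)

def complex_embed(r, omega, theta, pos):
    """Complex-valued embedding of one token at position `pos`:
    the amplitude r encodes the word itself, while the phase
    omega * pos + theta encodes its order in the sequence."""
    return r * np.exp(1j * (omega * pos + theta))

# hypothetical per-token parameters: amplitude, frequency, initial phase
r, omega, theta = rng.random(d), rng.random(d), rng.random(d)

e0 = complex_embed(r, omega, theta, pos=0)
e5 = complex_embed(r, omega, theta, pos=5)

# moving the token only rotates the phase; the magnitude
# (the word's identity) is unchanged
assert np.allclose(np.abs(e0), np.abs(e5))
assert np.allclose(np.abs(e0), r)
```

In the paper, r, omega and theta are learned per token, so word meaning and order sensitivity are trained jointly rather than added together as in sinusoidal positional encodings.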