FreedomIntelligence / complex-order
☆83 · Updated 5 years ago
Alternatives and similar repositories for complex-order
Users interested in complex-order are comparing it to the repositories listed below.
- This repo provides the code for the ACL 2020 paper "Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEnco… ☆55 · Updated 4 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets with SoTA performance. ☆85 · Updated 2 years ago
- PyTorch implementation of the methods proposed in "Adversarial Training Methods for Semi-Supervised Text Classification" on the IMDB datase… ☆43 · Updated 6 years ago
- ☆50 · Updated 2 years ago
- ☆94 · Updated 5 years ago
- The code of "Encoding Word Order in Complex-valued Embedding" ☆42 · Updated 6 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆252 · Updated 3 years ago
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆118 · Updated 4 years ago
- Code for the ACL 2020 paper "Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation" ☆39 · Updated 5 years ago
- PyTorch implementation of the paper "Hyperbolic Interaction Model for Hierarchical Multi-Label Classification" ☆48 · Updated 6 years ago
- Code for our paper at EMNLP 2019 ☆36 · Updated 5 years ago
- Code for the AAAI 2020 paper "Graph Transformer for Graph-to-Sequence Learning" ☆190 · Updated last year
- Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning (authors' MXNet implementation for the TACL 2019 paper) ☆78 · Updated 4 years ago
- Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction ☆20 · Updated 3 years ago
- Code implementation of the paper "Towards a Deep and Unified Understanding of Deep Neural Models in NLP" ☆73 · Updated 6 years ago
- A Transformer-based single-model, multi-scale VAE ☆57 · Updated 4 years ago
- Code release for our arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆185 · Updated 2 years ago
- Dialogue Relation Extraction with Document-level Heterogeneous Graph Attention Networks ☆55 · Updated 2 years ago
- Visualization for simple attention and Google's multi-head attention. ☆68 · Updated 7 years ago
- Code & data accompanying the ICLR 2020 paper "Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation" ☆125 · Updated 3 years ago
- Worth-reading papers and related resources on the attention mechanism, the Transformer, and pretrained language models (PLMs) such as BERT ☆130 · Updated 4 years ago
- An Unsupervised Sentence Embedding Method by Mutual Information Maximization (EMNLP 2020) ☆61 · Updated 4 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 4 years ago
- Variational Transformers for Diverse Response Generation ☆81 · Updated last year
- MASKER: Masked Keyword Regularization for Reliable Text Classification (AAAI 2021) ☆54 · Updated last year
- Source code for "DialogWAE: Multimodal Response Generation with Conditional Wasserstein Autoencoder" (https://arxiv.org/abs/1805.12352) ☆126 · Updated 7 years ago
- ☆47 · Updated 5 years ago
- A PyTorch implementation of self-attention with relative position representations ☆50 · Updated 4 years ago
- Heterogeneous Graph Transformer for Graph-to-Sequence Learning ☆48 · Updated 4 years ago