leimao / Two-Layer-Hierarchical-Softmax-PyTorch
Two-Layer Hierarchical Softmax Implementation for PyTorch
☆69 · Updated 4 years ago
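For context, a two-layer (class-based) hierarchical softmax factorizes the word probability as P(w | h) = P(c(w) | h) · P(w | c(w), h), where each word belongs to exactly one of roughly √|V| clusters, so training evaluates two softmaxes of size about √|V| instead of one softmax over the full vocabulary. The sketch below illustrates the idea in PyTorch; it is not the repository's code, and the module name, the equal-size clustering by word index, and the parameter layout are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoLayerHierarchicalSoftmax(nn.Module):
    """Factorizes P(word | h) = P(cluster | h) * P(word | cluster, h).

    Words are assigned to equal-size clusters by index, so both softmaxes
    are over roughly sqrt(vocab_size) classes. (Illustrative sketch only.)
    """

    def __init__(self, hidden_size, vocab_size, cluster_size=None):
        super().__init__()
        self.vocab_size = vocab_size
        self.cluster_size = cluster_size or int(vocab_size ** 0.5) + 1
        self.num_clusters = (vocab_size + self.cluster_size - 1) // self.cluster_size

        # First layer: softmax over clusters.
        self.cluster_logits = nn.Linear(hidden_size, self.num_clusters)
        # Second layer: one output matrix per cluster, stored as a 3-D parameter.
        # The last cluster may contain a few unused padding slots.
        self.word_weight = nn.Parameter(
            torch.randn(self.num_clusters, self.cluster_size, hidden_size) * 0.02)
        self.word_bias = nn.Parameter(torch.zeros(self.num_clusters, self.cluster_size))

    def forward(self, hidden, target):
        """hidden: (batch, hidden_size); target: (batch,) word indices.
        Returns the mean negative log-likelihood of the target words."""
        cluster_id = target // self.cluster_size   # cluster of each target word
        within_id = target % self.cluster_size     # position of the word inside its cluster

        # log P(cluster | h), gathered at each target's cluster.
        cluster_logp = F.log_softmax(self.cluster_logits(hidden), dim=-1)
        cluster_logp = cluster_logp.gather(1, cluster_id.unsqueeze(1)).squeeze(1)

        # log P(word | cluster, h): only the target's own cluster is evaluated.
        w = self.word_weight[cluster_id]           # (batch, cluster_size, hidden_size)
        b = self.word_bias[cluster_id]             # (batch, cluster_size)
        word_logits = torch.bmm(w, hidden.unsqueeze(2)).squeeze(2) + b
        word_logp = F.log_softmax(word_logits, dim=-1)
        word_logp = word_logp.gather(1, within_id.unsqueeze(1)).squeeze(1)

        return -(cluster_logp + word_logp).mean()
```

In practice, frequency-sorted cluster assignment (common words grouped first) usually works better than assignment by raw index, and PyTorch's built-in nn.AdaptiveLogSoftmaxWithLoss provides a related variant with unequal cluster sizes.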
Alternatives and similar repositories for Two-Layer-Hierarchical-Softmax-PyTorch
Users interested in Two-Layer-Hierarchical-Softmax-PyTorch are comparing it to the repositories listed below.
- Source code for "DialogWAE: Multimodal Response Generation with Conditional Wasserstein Autoencoder" (https://arxiv.org/abs/1805.12352) ☆125 · Updated 6 years ago
- Beam search for neural network sequence-to-sequence (encoder-decoder) models. ☆34 · Updated 6 years ago
- PyTorch DataLoader for seq2seq ☆85 · Updated 6 years ago
- A simple module that consistently outperforms self-attention and the Transformer on main NMT datasets with SoTA performance. ☆85 · Updated last year
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- ☆93 · Updated 3 years ago
- Re-implementation of "QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension" ☆120 · Updated 6 years ago
- Adaptive Softmax implementation for PyTorch ☆80 · Updated 6 years ago
- Encoding position with the word embeddings. ☆83 · Updated 7 years ago
- PyTorch language model for the 1-Billion Word (LM1B / GBW) dataset ☆123 · Updated 5 years ago
- Label Embedding Network ☆91 · Updated 7 years ago
- Implementation of Skip-gram negative sampling with PyTorch ☆49 · Updated 6 years ago
- Code for "Strong Baselines for Neural Semi-supervised Learning under Domain Shift" (Ruder & Plank, ACL 2018) ☆61 · Updated 2 years ago
- PyTorch implementation of "BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning" (https://arxiv.org/ab…) ☆82 · Updated 6 years ago
- Reproducing "Character-Level Language Modeling with Deeper Self-Attention" in PyTorch ☆61 · Updated 6 years ago
- PyTorch implementation of "Non-Autoregressive Neural Machine Translation" ☆269 · Updated 3 years ago
- Implementation of Densely Connected Attention Propagation for Reading Comprehension (NIPS 2018) ☆69 · Updated 6 years ago
- ☆83 · Updated 5 years ago
- A PyTorch implementation of a Bi-LSTM CRF with character-level features ☆63 · Updated 6 years ago
- ☆13 · Updated 6 years ago
- PyTorch implementation of "Dynamic Coattention Networks For Question Answering" ☆62 · Updated 6 years ago
- Visualization for simple attention and Google's multi-head attention. ☆67 · Updated 7 years ago
- Highway network implemented in PyTorch ☆80 · Updated 8 years ago
- PyTorch implementations of LSTM variants (Dropout + Layer Norm) ☆136 · Updated 4 years ago
- Code for "Language GANs Falling Short" ☆59 · Updated 4 years ago
- Sequence-to-sequence models in PyTorch ☆44 · Updated 9 months ago
- Highway Networks implemented in PyTorch ☆71 · Updated 2 years ago
- ☆53 · Updated 4 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆151 · Updated last year
- A PyTorch implementation of self-attention with relative position representations ☆50 · Updated 4 years ago