leimao / Two-Layer-Hierarchical-Softmax-PyTorch
Two-Layer Hierarchical Softmax Implementation for PyTorch
☆69 · Updated 4 years ago
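The repository implements the classic two-level speed-up for large-vocabulary output layers: instead of one softmax over all V words, the model first predicts one of roughly √V word clusters and then predicts the word within that cluster, so each training step scores O(√V) outputs instead of O(V). Below is a minimal sketch of such a head, assuming equal-size clusters built from a flat index split; the class name and tensor layout are illustrative, not the repo's actual API.

```python
# Minimal two-layer hierarchical softmax sketch (illustrative, not the
# repo's API). Assumes the vocabulary is split into ~sqrt(V) equal-size
# clusters by flat index; trailing pad slots in the last cluster simply
# receive (wasted) probability mass, which is acceptable for a sketch.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerHierarchicalSoftmax(nn.Module):
    def __init__(self, hidden_dim, vocab_size):
        super().__init__()
        self.n_clusters = math.ceil(math.sqrt(vocab_size))
        self.cluster_size = math.ceil(vocab_size / self.n_clusters)
        # Layer 1 scores clusters; layer 2 holds per-word parameters,
        # viewed as (n_clusters, cluster_size, hidden_dim) at use time.
        self.cluster_proj = nn.Linear(hidden_dim, self.n_clusters)
        self.word_proj = nn.Linear(hidden_dim, self.n_clusters * self.cluster_size)

    def forward(self, hidden, target):
        # hidden: (batch, hidden_dim); target: (batch,) word indices.
        cluster = target // self.cluster_size   # each target's cluster id
        within = target % self.cluster_size     # position inside the cluster
        # log P(cluster | h)
        cluster_logp = F.log_softmax(self.cluster_proj(hidden), dim=-1)
        cluster_term = cluster_logp.gather(1, cluster.unsqueeze(1)).squeeze(1)
        # log P(word | cluster, h): score only words in the target's cluster.
        w = self.word_proj.weight.view(self.n_clusters, self.cluster_size, -1)
        b = self.word_proj.bias.view(self.n_clusters, self.cluster_size)
        word_logits = torch.einsum('bd,bcd->bc', hidden, w[cluster]) + b[cluster]
        word_term = F.log_softmax(word_logits, dim=-1).gather(
            1, within.unsqueeze(1)).squeeze(1)
        # NLL of the factorized probability P(w|h) = P(c|h) * P(w|c,h).
        return -(cluster_term + word_term).mean()
```

Each training step thus costs two (batch × √V)-sized score computations rather than one full (batch × V) projection, which is the whole point of the hierarchy.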
Alternatives and similar repositories for Two-Layer-Hierarchical-Softmax-PyTorch
Users interested in Two-Layer-Hierarchical-Softmax-PyTorch are comparing it to the libraries listed below.
- Encoding position with the word embeddings. ☆83 · Updated 7 years ago
- Implementing Skip-gram Negative Sampling with PyTorch ☆49 · Updated 6 years ago
- Reproducing Character-Level-Language-Modeling with Deeper Self-Attention in PyTorch ☆61 · Updated 6 years ago
- Re-implementation of "QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension" ☆120 · Updated 6 years ago
- PyTorch Language Model for the 1-Billion Word (LM1B / GBW) Dataset ☆123 · Updated 5 years ago
- Source code for DialogWAE: Multimodal Response Generation with Conditional Wasserstein Autoencoder (https://arxiv.org/abs/1805.12352) ☆125 · Updated 6 years ago
- LAnguage Modelling Benchmarks ☆137 · Updated 5 years ago
- PyTorch DataLoader for seq2seq ☆85 · Updated 6 years ago
- NeurIPS 2019 - Learning Data Manipulation for Augmentation and Weighting ☆109 · Updated 4 years ago
- DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding ☆26 · Updated 6 years ago
- Semi-Supervised Learning for Text Classification ☆83 · Updated 6 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets with SoTA performance. ☆85 · Updated last year
- Hierarchical Attention Networks for Document Classification in PyTorch ☆36 · Updated 6 years ago
- PyTorch neural network attention mechanisms ☆147 · Updated 6 years ago
- ☆83 · Updated 5 years ago
- Minimal RNN classifier with self-attention in PyTorch ☆150 · Updated 3 years ago
- A PyTorch implementation of self-attention with relative position representations ☆50 · Updated 4 years ago
- Adaptive Softmax implementation for PyTorch (see the usage sketch after this list) ☆81 · Updated 6 years ago
- ☆53 · Updated 4 years ago
- Code for the EMNLP 2019 paper "Attention is not not Explanation" ☆58 · Updated 4 years ago
- Checking the interpretability of attention on text classification models ☆49 · Updated 5 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- ☆93 · Updated 3 years ago
- Highway Networks implemented in PyTorch ☆71 · Updated 2 years ago
- ☆41 · Updated 9 years ago
- PyTorch implementation of BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning (https://arxiv.org/ab…) ☆82 · Updated 6 years ago
- ☆218 · Updated 5 years ago
- Visualization for simple attention and Google's multi-head attention. ☆67 · Updated 7 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆152 · Updated 2 years ago
- Implementation of Densely Connected Attention Propagation for Reading Comprehension (NIPS 2018) ☆69 · Updated 6 years ago
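On the Adaptive Softmax entry above: the technique also ships with PyTorch itself as torch.nn.AdaptiveLogSoftmaxWithLoss, so a third-party module is not strictly required. A minimal usage sketch, with illustrative dimensions and cutoffs:

```python
# Usage sketch for PyTorch's built-in adaptive softmax; the sizes and
# cutoffs below are illustrative, not prescribed values.
import torch
import torch.nn as nn

hidden_dim, vocab_size, batch = 256, 50_000, 32
# Words below the first cutoff form the full-capacity head; the rest fall
# into tail clusters whose capacity shrinks by div_value per tier.
criterion = nn.AdaptiveLogSoftmaxWithLoss(
    in_features=hidden_dim,
    n_classes=vocab_size,
    cutoffs=[2_000, 10_000],
)

hidden = torch.randn(batch, hidden_dim)          # e.g. RNN/Transformer states
target = torch.randint(0, vocab_size, (batch,))  # frequency-sorted word ids
out = criterion(hidden, target)
print(out.loss)  # mean NLL, computed without a full vocab-wide softmax
```

This assumes word indices are sorted by descending frequency, which is what makes the head/tail split effective.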