leimao / Two-Layer-Hierarchical-Softmax-PyTorch
Two-Layer Hierarchical Softmax Implementation for PyTorch
☆70 · Updated 4 years ago
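The repository implements a two-layer (class-based) hierarchical softmax, which factors the word probability as P(w | h) = P(c | h) · P(w | c, h), where c is the cluster containing word w, reducing the per-step cost from O(V) to roughly O(√V). The sketch below illustrates that factorization only; the class name, equal-size modulo clustering, and module interface are assumptions for illustration, not the repository's actual API.

```python
import torch
import torch.nn as nn


class TwoLayerHierarchicalSoftmax(nn.Module):
    """Minimal sketch: P(w | h) = P(cluster | h) * P(w | cluster, h).

    Assumes vocab_size is divisible by n_clusters and that word i belongs
    to cluster i // words_per_cluster (an illustrative clustering, not the
    repository's scheme).
    """

    def __init__(self, hidden_size, vocab_size, n_clusters):
        super().__init__()
        assert vocab_size % n_clusters == 0
        self.words_per_cluster = vocab_size // n_clusters
        # First layer: distribution over clusters.
        self.cluster_logits = nn.Linear(hidden_size, n_clusters)
        # Second layer: one output matrix per cluster, stored as a 3-D tensor.
        self.word_weight = nn.Parameter(
            torch.randn(n_clusters, hidden_size, self.words_per_cluster) * 0.02)
        self.word_bias = nn.Parameter(torch.zeros(n_clusters, self.words_per_cluster))

    def forward(self, hidden, targets):
        # hidden: (batch, hidden_size); targets: (batch,) word indices.
        cluster_id = torch.div(targets, self.words_per_cluster, rounding_mode="floor")
        in_cluster = targets % self.words_per_cluster

        # log P(cluster | h), gathered at each target's cluster.
        log_p_cluster = torch.log_softmax(self.cluster_logits(hidden), dim=-1)
        log_p_cluster = log_p_cluster.gather(1, cluster_id.unsqueeze(1)).squeeze(1)

        # log P(w | cluster, h): score only the words inside each target's cluster.
        w = self.word_weight[cluster_id]   # (batch, hidden_size, words_per_cluster)
        b = self.word_bias[cluster_id]     # (batch, words_per_cluster)
        logits = torch.bmm(hidden.unsqueeze(1), w).squeeze(1) + b
        log_p_word = torch.log_softmax(logits, dim=-1)
        log_p_word = log_p_word.gather(1, in_cluster.unsqueeze(1)).squeeze(1)

        return -(log_p_cluster + log_p_word).mean()  # NLL over the batch
```

During training only the target's cluster is scored, so each step touches n_clusters + words_per_cluster outputs instead of the full vocabulary; evaluating the full distribution (e.g. for sampling) still requires scoring every cluster.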
Alternatives and similar repositories for Two-Layer-Hierarchical-Softmax-PyTorch
Users interested in Two-Layer-Hierarchical-Softmax-PyTorch are also comparing it to the repositories listed below.
- PyTorch Language Model for 1-Billion Word (LM1B / GBW) Dataset ☆123 · Updated 6 years ago
- PyTorch DataLoader for seq2seq ☆85 · Updated 6 years ago
- Adaptive Softmax implementation for PyTorch ☆81 · Updated 6 years ago
- Checking the interpretability of attention on text classification models ☆49 · Updated 6 years ago
- A simple module consistently outperforms self-attention and Transformer models on main NMT datasets with SoTA performance. ☆86 · Updated 2 years ago
- LAnguage Modelling Benchmarks ☆138 · Updated 5 years ago
- NeurIPS 2019 - Learning Data Manipulation for Augmentation and Weighting ☆110 · Updated 5 years ago
- ☆93 · Updated 4 years ago
- Minimal RNN classifier with self-attention in PyTorch ☆152 · Updated 3 years ago
- Encoding position with the word embeddings. ☆84 · Updated 7 years ago
- LAMB Optimizer for Large Batch Training (TensorFlow version) ☆121 · Updated 5 years ago
- Implementing Skip-gram Negative Sampling with PyTorch ☆49 · Updated 7 years ago
- Visualization for simple attention and Google's multi-head attention. ☆68 · Updated 7 years ago
- DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding ☆26 · Updated 7 years ago
- Latent Alignment and Variational Attention ☆327 · Updated 6 years ago
- ☆84 · Updated 5 years ago
- Re-implementation of "QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension" ☆120 · Updated 6 years ago
- ☆219 · Updated 5 years ago
- PyTorch implementations of LSTM variants (Dropout + Layer Norm) ☆137 · Updated 4 years ago
- Highway network implemented in PyTorch ☆80 · Updated 8 years ago
- PyTorch neural network attention mechanisms ☆147 · Updated 6 years ago
- A complete PyTorch implementation of skip-gram ☆193 · Updated 8 years ago
- ☆97 · Updated 5 years ago
- PyTorch implementation of "Non-Autoregressive Neural Machine Translation" ☆271 · Updated 3 years ago
- Reproducing "Character-Level Language Modeling with Deeper Self-Attention" in PyTorch ☆62 · Updated 6 years ago
- An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built in. Fully compatible with PyTorch's LSTM. ☆134 · Updated 5 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- ☆53 · Updated 5 years ago
- Worth-reading papers and related resources on attention mechanisms, Transformers, and pretrained language models (PLMs) such as BERT. ☆130 · Updated 4 years ago
- Source code for "DialogWAE: Multimodal Response Generation with Conditional Wasserstein Autoencoder" (https://arxiv.org/abs/1805.12352) ☆126 · Updated 7 years ago