rawmarshmellows / pytorch-batch-luong-attention
☆16 · Updated 6 years ago
Alternatives and similar repositories for pytorch-batch-luong-attention
Users interested in pytorch-batch-luong-attention are comparing it to the repositories listed below.
- PyTorch implementation of batched bi-RNN encoder and attention decoder.☆280 · Updated 6 years ago
- A PyTorch implementation of Fairseq Convolutional Sequence to Sequence Learning (Gehring et al. 2017)☆46 · Updated 6 years ago
- ☆76 · Updated 5 years ago
- This repository contains various types of attention mechanisms, such as Bahdanau, soft, additive, and hierarchical attention…☆126 · Updated 3 years ago
- Sequence to Sequence Models with PyTorch☆26 · Updated 7 years ago
- PyTorch implementation of beam search decoding for seq2seq models☆337 · Updated 2 years ago
- Using PyTorch's nn.Transformer module to create an English-to-French neural machine translation model.☆78 · Updated 4 years ago
- Document classification using LSTM + self-attention☆113 · Updated 5 years ago
- An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built in. Fully compatible with PyTorch LSTM.☆133 · Updated 5 years ago
- Sequence to Sequence Models in PyTorch☆44 · Updated 11 months ago
- Minimal RNN classifier with self-attention in PyTorch☆150 · Updated 3 years ago
- Two-Layer Hierarchical Softmax Implementation for PyTorch☆69 · Updated 4 years ago
- PyTorch implementation of "Effective Approaches to Attention-based Neural Machine Translation" using scheduled sampling to improve the pa…☆38 · Updated 7 years ago
- A quick walk-through of the innards of LSTMs and a naive implementation of the Mogrifier LSTM paper in PyTorch☆78 · Updated 4 years ago
- Minimal Seq2Seq model with attention for neural machine translation in PyTorch☆701 · Updated 4 years ago
- PyTorch implementation of LSTM-based text classification on the R8 dataset☆141 · Updated 7 years ago
- Simple implementation of dilated LSTM, residual LSTM, and attention LSTM (following the corresponding papers).☆17 · Updated 5 years ago
- PyTorch implementation of R-Transformer. Some parts of the code are adapted from implementations of TCN and Transformer.☆230 · Updated 6 years ago
- Semi-supervised learning for text classification☆83 · Updated 6 years ago
- Reproducing "Character-Level Language Modeling with Deeper Self-Attention" in PyTorch☆61 · Updated 6 years ago
- PyTorch implementation of "Attention Is All You Need"☆238 · Updated 4 years ago
- Multi-head attention in PyTorch☆153 · Updated 6 years ago
- PyTorch implementation of a Seq2Seq model with attention and greedy search / beam search for neural machine translation☆58 · Updated 4 years ago
- Highway Networks implemented in PyTorch☆71 · Updated 2 years ago
- A PyTorch implementation of a multitask learning architecture for natural language processing☆41 · Updated 5 years ago
- DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding☆26 · Updated 6 years ago
- Encoding position with the word embeddings.☆83 · Updated 7 years ago
- PyTorch implementations of LSTM variants (dropout + layer norm)☆136 · Updated 4 years ago
- Implementation of the Universal Transformer in PyTorch☆261 · Updated 6 years ago
- A PyTorch implementation of the Transformer in "Attention Is All You Need"☆106 · Updated 4 years ago
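The repositories above center on Luong-style attention for seq2seq models. For orientation, here is a minimal sketch of the "general" (multiplicative) score from Luong et al. 2015, score(h_t, h̄_s) = h_tᵀ·W_a·h̄_s, written for batched inputs; class and variable names are illustrative, not taken from any of the listed repos:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LuongGeneralAttention(nn.Module):
    """Luong 'general' attention: score(h_t, h_s) = h_t^T W_a h_s, batched."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # W_a projects encoder states before the dot product with the decoder state
        self.W_a = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, decoder_hidden: torch.Tensor, encoder_outputs: torch.Tensor):
        # decoder_hidden:  (batch, hidden)          -- current decoder state h_t
        # encoder_outputs: (batch, src_len, hidden) -- all encoder states h_s
        projected = self.W_a(encoder_outputs)                     # (batch, src_len, hidden)
        scores = torch.bmm(projected,
                           decoder_hidden.unsqueeze(2)).squeeze(2)  # (batch, src_len)
        weights = F.softmax(scores, dim=1)                        # attention distribution
        context = torch.bmm(weights.unsqueeze(1),
                            encoder_outputs).squeeze(1)           # (batch, hidden)
        return context, weights

attn = LuongGeneralAttention(hidden_size=8)
context, weights = attn(torch.randn(4, 8), torch.randn(4, 5, 8))
```

The returned context vector is typically concatenated with the decoder state and fed through a further layer to produce the attentional hidden state; the listed implementations differ mainly in that wiring and in which score function (dot, general, concat) they use.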