ifding / seq2seq-pytorch
Sequence to Sequence Models with PyTorch
☆27Updated 6 years ago
Related projects:
- This repository contains various types of attention mechanisms: Bahdanau attention, soft attention, additive attention, hierarchical attention…☆123Updated 2 years ago
- A PyTorch implementation of Fairseq Convolutional Sequence to Sequence Learning (Gehring et al., 2017)☆43Updated 5 years ago
- An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built-in. Fully compatible with PyTorch LSTM.☆133Updated 4 years ago
- Sequence to Sequence Models in PyTorch☆44Updated last month
- LSTM classification using PyTorch☆66Updated 5 years ago
- (minimal implementation) BiLSTM-Attention for Relation Classification☆29Updated 5 years ago
- [IN PROGRESS] An introduction to generative adversarial networks (GANs) and variational autoencoders (VAEs) in PyTorch, by implementing a…☆29Updated 5 years ago
- An implementation of an encoder-decoder model with a global attention mechanism.☆30Updated 5 years ago
- Implementing skip-gram negative sampling with PyTorch☆49Updated 6 years ago
- NLSTM: Nested LSTM in PyTorch☆18Updated 6 years ago
- Multi heads attention for image classification☆80Updated 6 years ago
- RNN Encoder-Decoder in PyTorch☆40Updated last month
- Seq2seq attention in Keras☆40Updated 5 years ago
- Attention mechanism in Keras, usable like Dense and RNN layers...☆20Updated 6 years ago
- Reproducing Character-Level-Language-Modeling with Deeper Self-Attention in PyTorch☆59Updated 5 years ago
- A quick walk-through of the innards of LSTMs and a naive implementation of the Mogrifier LSTM paper in PyTorch☆73Updated 4 years ago
- Simple implementations of dilated LSTM, residual LSTM, and attention LSTM (following the corresponding papers).☆17Updated 4 years ago
- PyTorch implementations of LSTM Variants (Dropout + Layer Norm)☆136Updated 3 years ago
- Repository for Attention Algorithm☆41Updated 6 years ago
- PyTorch implementation of Dauphin et al. (2016), "Language Modeling with Gated Convolutional Networks"☆29Updated last year
- Implementation of "Attention is All You Need" paper☆32Updated last month
- This project aims to give you an introduction to how Seq2Seq based encoder-decoder neural network architectures can be applied on time se…☆41Updated 5 years ago
- PyTorch implementation of batched GRU encoder and decoder.☆30Updated 6 years ago
- Encoding position with the word embeddings.☆83Updated 6 years ago
- PyTorch implementation of R-Transformer. Some parts of the code are adapted from the implementations of TCN and Transformer.☆224Updated 5 years ago
- A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction☆110Updated 2 months ago
- Implementation of the Transformer architecture described by Vaswani et al. in "Attention Is All You Need"☆28Updated 5 years ago
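Several of the projects above implement Bahdanau-style (additive) attention for seq2seq decoding. A minimal sketch of the idea in PyTorch follows; the module name, tensor shapes, and hidden sizes are illustrative assumptions, not code taken from any repository listed here:

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Bahdanau-style additive attention (illustrative sketch).

    Scores each encoder state h_j against the decoder query s with
    score(s, h_j) = v^T tanh(W_q s + W_k h_j), then takes a
    softmax-weighted sum of the encoder states as the context vector.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.w_query = nn.Linear(hidden_size, hidden_size, bias=False)
        self.w_key = nn.Linear(hidden_size, hidden_size, bias=False)
        self.v = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, query: torch.Tensor, keys: torch.Tensor):
        # query: (batch, hidden) decoder state
        # keys:  (batch, src_len, hidden) encoder states
        scores = self.v(torch.tanh(
            self.w_query(query).unsqueeze(1) + self.w_key(keys)
        ))                                        # (batch, src_len, 1)
        weights = torch.softmax(scores, dim=1)    # normalize over source positions
        context = (weights * keys).sum(dim=1)     # (batch, hidden) weighted sum
        return context, weights.squeeze(-1)

# Toy shapes for demonstration only.
batch, src_len, hidden = 2, 5, 8
attn = AdditiveAttention(hidden)
context, weights = attn(torch.randn(batch, hidden),
                        torch.randn(batch, src_len, hidden))
print(context.shape, weights.shape)  # torch.Size([2, 8]) torch.Size([2, 5])
```

In a full decoder this context vector would be concatenated with the decoder input (or state) at every step; the repositories above differ mainly in where the context is injected and in the scoring function (additive vs. dot-product).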