ap229997 / LanguageModel-using-Attention
PyTorch implementation of a basic language model using attention in an LSTM network
☆26Updated 6 years ago
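The repository above describes an LSTM language model augmented with attention. As a rough illustration of that idea (not code from the ap229997 repository; all names and the additive-attention layout are assumptions), an LSTM can attend over its own past hidden states with a causal mask before predicting the next token:

```python
import torch
import torch.nn as nn

class AttnLSTMLanguageModel(nn.Module):
    """Minimal sketch of an LSTM language model with additive attention
    over the model's own past hidden states. Illustrative only; not the
    architecture used in the listed repository."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim * 2, 1)   # scores [h_t; h_i]
        self.out = nn.Linear(hidden_dim * 2, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) of token ids
        h, _ = self.lstm(self.embed(tokens))        # (B, T, H)
        B, T, H = h.shape
        # Additive attention: score every (query step t, key step i) pair.
        q = h.unsqueeze(2).expand(B, T, T, H)       # query h_t
        k = h.unsqueeze(1).expand(B, T, T, H)       # key   h_i
        scores = self.attn(torch.cat([q, k], dim=-1)).squeeze(-1)  # (B, T, T)
        # Causal mask: step t may only attend to steps i <= t.
        mask = torch.tril(torch.ones(T, T, dtype=torch.bool))
        scores = scores.masked_fill(~mask, float("-inf"))
        weights = torch.softmax(scores, dim=-1)     # (B, T, T)
        context = weights @ h                       # (B, T, H)
        # Predict next token from the hidden state plus its context vector.
        return self.out(torch.cat([h, context], dim=-1))  # (B, T, vocab)

model = AttnLSTMLanguageModel(vocab_size=50)
logits = model(torch.randint(0, 50, (2, 7)))
print(tuple(logits.shape))  # (2, 7, 50)
```

Training would proceed as in any language model: shift the inputs by one position and minimize cross-entropy between `logits` and the next-token targets.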
Alternatives and similar repositories for LanguageModel-using-Attention
Users that are interested in LanguageModel-using-Attention are comparing it to the libraries listed below
- Repository for Attention Algorithm☆41Updated 7 years ago
- Hierarchical Attention Networks for Document Classification in PyTorch☆36Updated 6 years ago
- PyTorch implementation of batched bi-RNN encoder and attention-decoder.☆279Updated 6 years ago
- A PyTorch implementation of Language Modeling with Gated Convolutional Networks☆100Updated 3 years ago
- PTB Language Modelling task with LSTM + Attention layer☆31Updated 7 years ago
- A PyTorch implementation of Fairseq Convolutional Sequence to Sequence Learning (Gehring et al., 2017)☆46Updated 6 years ago
- A PyTorch implementation of a multitask learning architecture for Natural Language Processing☆41Updated 5 years ago
- Implementation of Hierarchical Attention Networks in PyTorch☆129Updated 6 years ago
- TensorFlow implementation of Densely Connected Bidirectional LSTM with Applications to Sentence Classification☆47Updated 7 years ago
- A recurrent attention module consisting of an LSTM cell which can query its own past cell states by means of windowed multi-head atte…☆145Updated 6 years ago
- Document classification using LSTM + self-attention☆112Updated 5 years ago
- Nested LSTM (NLSTM) in PyTorch☆18Updated 7 years ago
- Codes for "Towards Binary-Valued Gates for Robust LSTM Training".☆76Updated 6 years ago
- Bidirectional GRU with an attention mechanism on the IMDB sentiment analysis dataset☆34Updated 7 years ago
- Sequence to sequence and attention from scratch using TensorFlow☆29Updated 7 years ago
- A minimal NMT example to serve as a seq2seq+attention reference.☆36Updated 5 years ago
- PyTorch implementation of text classification with LSTM on the R8 dataset☆141Updated 7 years ago
- Implements the en-fr translation task with a seq2seq encoder-decoder in RNN layers, with an attention mechanism and beam-search inference d…☆21Updated 7 years ago
- Text classification models: CNN, self-attention, CNN-RNF, RNN-att, capsule-net. TensorFlow; single or multi GPU☆19Updated 5 years ago
- SRU implemented in PyTorch (Training RNNs as Fast as CNNs)☆46Updated 2 years ago
- 6️⃣6️⃣6️⃣ Reproduce ICLR '18 under-reviewed paper "MULTI-TASK LEARNING ON MNIST IMAGE DATASETS"☆41Updated 6 years ago
- PyTorch DataLoader for seq2seq☆85Updated 6 years ago
- Keras implementation of "Gated Linear Unit"☆23Updated last year
- ☆38Updated 7 years ago
- Code release for "Learning Multiple Tasks with Multilinear Relationship Networks" (NIPS 2017)☆70Updated 7 years ago
- This repository contains various types of attention mechanisms, such as Bahdanau, soft attention, additive attention, hierarchical attention…☆125Updated 3 years ago
- ☆12Updated 8 years ago
- Bi-Directional Block Self-Attention☆122Updated 7 years ago
- DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding☆26Updated 6 years ago
- Convolutional Variational Autoencoder☆34Updated 5 years ago