titu1994 / keras-SRU
Implementation of Simple Recurrent Unit in Keras
☆89 · Updated 7 years ago
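For context on what the repository implements: the Simple Recurrent Unit (SRU) from "Training RNNs as Fast as CNNs" replaces the matrix-valued recurrence of an LSTM with purely elementwise state updates, so the heavy matrix multiplications depend only on the input and can be batched across time steps. A minimal NumPy sketch of the per-step recurrence (function and parameter names are illustrative, not taken from the repository):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_forward(x_seq, W, Wf, bf, Wr, br):
    """Run an SRU over a sequence.

    x_seq: (T, d) input sequence; W, Wf, Wr: (d, d); bf, br: (d,).
    Returns the (T, d) hidden states and the final cell state.
    """
    d = x_seq.shape[1]
    c = np.zeros(d)  # cell state
    outputs = []
    for x in x_seq:
        x_tilde = W @ x                     # input transform (no recurrent matmul)
        f = sigmoid(Wf @ x + bf)            # forget gate
        r = sigmoid(Wr @ x + br)            # reset gate
        c = f * c + (1.0 - f) * x_tilde     # elementwise-only recurrence
        h = r * np.tanh(c) + (1.0 - r) * x  # highway connection to the raw input
        outputs.append(h)
    return np.stack(outputs), c
```

Because the recurrence on `c` involves only elementwise products, the three matrix products can be precomputed for all time steps at once, which is the source of the speedup the paper's title refers to.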
Alternatives and similar repositories for keras-SRU:
Users interested in keras-SRU are comparing it to the libraries listed below.
- Keras implementation of Nested LSTMs ☆89 · Updated 6 years ago
- Layer normalization implemented in Keras ☆60 · Updated 3 years ago
- (no description) ☆24 · Updated 6 years ago
- Training RNNs as fast as CNNs. An unofficial TensorFlow implementation. ☆32 · Updated 6 years ago
- fairseq: Convolutional Sequence to Sequence Learning (Gehring et al. 2017) in Chainer ☆64 · Updated 7 years ago
- Toy Keras implementation of a seq2seq model with examples ☆47 · Updated 4 years ago
- Implementation of an LSTM GAN for generating Twitter posts ☆30 · Updated 8 years ago
- Quasi-RNN for language modeling ☆57 · Updated 8 years ago
- Implementation of the paper "Very Deep Convolutional Networks for Natural Language Processing" (https://arxiv.org/abs/1606.01781) in TensorFlow ☆55 · Updated 7 years ago
- Implementation of IndRNN in Keras ☆67 · Updated 4 years ago
- A CNN with an attentional module, built while attending the Brains, Minds and Machines summer course ☆68 · Updated 5 years ago
- Implementation of Hierarchical Attention Networks as presented in https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf ☆57 · Updated 6 years ago
- Simple convolutional highway networks using TensorFlow ☆56 · Updated 8 years ago
- (no description) ☆38 · Updated 7 years ago
- Examples of using GridLSTM (and GridRNN in general) in TensorFlow ☆63 · Updated 7 years ago
- Implementing, learning from, and re-implementing "End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF" in Keras ☆71 · Updated 7 years ago
- blstm-cws: Bi-directional LSTM for Chinese Word Segmentation ☆45 · Updated 7 years ago
- Quasi-recurrent Neural Networks for Keras ☆74 · Updated 7 years ago
- LSTM-Attention ☆74 · Updated 7 years ago
- Keras implementation of [The unreasonable effectiveness of the forget gate](https://arxiv.org/abs/1804.04849) ☆35 · Updated 5 years ago
- Keras implementation of "Gated Linear Unit" ☆23 · Updated 9 months ago
- Training RNNs as Fast as CNNs (Simple Recurrent Unit) ☆30 · Updated 7 years ago
- Implementation of the Transformer architecture described by Vaswani et al. in "Attention Is All You Need" ☆28 · Updated 5 years ago
- An implementation of LSTM with Recurrent Batch Normalization ☆19 · Updated 7 years ago
- Sequence-to-sequence neural network model for dialogue systems ☆44 · Updated 8 years ago
- (no description) ☆65 · Updated 2 years ago
- Library to train parallel-aligned sequence data based on Keras ☆51 · Updated 7 years ago
- Character-Aware Neural Language Models: a Keras-based implementation ☆118 · Updated 3 years ago
- Multilingual hierarchical attention networks toolkit ☆77 · Updated 5 years ago
- Experimental results of LSTM language models on PTB (Penn Treebank) and GBW (Google Billion Word) using AdaptiveSoftmax on TensorFlow ☆100 · Updated 6 years ago