glample / rnn-benchmarks
Benchmarks for several RNN variations with different deep-learning frameworks
☆169 · Updated 5 years ago
Related projects
Alternatives and complementary repositories for rnn-benchmarks
- Code and models from the paper "Layer Normalization" (☆245 · Updated 8 years ago)
- Recurrent neural network for modeling sequential data, implemented in Python and Theano (☆92 · Updated 9 years ago)
- Torch implementation of seq2seq machine translation with GRU RNN and attention (☆78 · Updated 7 years ago)
- Implementations of the "LSTM: A Search Space Odyssey" variants and their training results on the PTB dataset (☆96 · Updated 7 years ago)
- Montréal Deep Learning Summer School 2016 material (☆100 · Updated 8 years ago)
- Generative Adversarial Networks with Keras (☆156 · Updated 4 years ago)
- TensorFlow implementations of techniques such as Layer Normalization and HyperNetworks (☆112 · Updated 8 years ago)
- Efficient layer normalization GPU kernel for TensorFlow (☆111 · Updated 7 years ago)
- Self-contained software accompanying the paper "Learning Longer Memory in Recurrent Neural Networks": http://arxiv.org/ab… (☆169 · Updated 6 years ago)
- This library provides utilities for creating and manipulating RNNs to model sequential data (☆192 · Updated 7 years ago)
- Recreating the Deep Residual Network in Lasagne (☆118 · Updated 8 years ago)
- Language modeling (☆156 · Updated 5 years ago)
- Fork of https://github.com/Lasagne/Lasagne (☆64 · Updated 8 years ago)
- Multi-GPU mini-framework for Theano (☆195 · Updated 7 years ago)
- Torch7 implementation of Grid LSTM as described in http://arxiv.org/pdf/1507.01526v2.pdf (☆188 · Updated 8 years ago)
- End-to-end memory networks in Theano (☆131 · Updated 2 years ago)
- RecNet: a recurrent neural network framework (☆72 · Updated 7 years ago)
- Bidirectional LSTM (☆153 · Updated 8 years ago)
- Implementation of http://arxiv.org/abs/1511.05641, which lets one build a larger net starting from a smaller one (☆160 · Updated 7 years ago)
- Implementation of the paper [Using Fast Weights to Attend to the Recent Past](https://arxiv.org/abs/1610.06258) (☆172 · Updated 8 years ago)
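Several of the entries above concern layer normalization. For quick reference, here is a minimal NumPy sketch of the core computation (the function name and shapes are illustrative and not taken from any of the listed repositories):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # Normalize each sample over its feature dimension (last axis),
    # then apply a learned per-feature scale (gamma) and shift (beta).
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Two samples with three features each; identity scale and zero shift.
x = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
y = layer_norm(x, gamma=np.ones(3), beta=np.zeros(3))
```

Unlike batch normalization, the statistics here are computed per sample rather than per mini-batch, which is why layer normalization is a natural fit for RNNs with variable-length sequences.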