philipperemy / keras-attention
Keras Attention Layer (Luong and Bahdanau scores).
☆2,806 · Updated last year
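The two score functions this repository is named for can be sketched in plain NumPy. This is an illustrative sketch of the Luong (dot) and Bahdanau (additive) attention scores, not the repository's actual API; all function and variable names here are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def luong_score(query, keys):
    # Luong (dot) score: score(h_t, h_s) = h_t . h_s
    return keys @ query

def bahdanau_score(query, keys, W1, W2, v):
    # Bahdanau (additive) score: score(h_t, h_s) = v^T tanh(W1 h_t + W2 h_s)
    return np.tanh(keys @ W2.T + query @ W1.T) @ v

rng = np.random.default_rng(0)
d, T = 4, 5                        # hidden size, encoder timesteps
query = rng.normal(size=d)         # decoder state h_t
keys = rng.normal(size=(T, d))     # encoder states h_s

# Dot-score attention weights and the resulting context vector.
weights = softmax(luong_score(query, keys))
context = weights @ keys

# Additive score with randomly initialized (normally learned) parameters.
scores_b = bahdanau_score(query, keys,
                          W1=rng.normal(size=(d, d)),
                          W2=rng.normal(size=(d, d)),
                          v=rng.normal(size=d))
```

In a real layer, `W1`, `W2`, and `v` are trainable weights and the weighted `context` is fed to the decoder; the sketch only shows the score arithmetic.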
Alternatives and similar repositories for keras-attention:
Users interested in keras-attention are comparing it to the libraries listed below.
- Visualizing RNNs using the attention mechanism ☆749 · Updated 5 years ago
- Attention mechanism for processing sequential data that considers the context for each timestamp ☆658 · Updated 3 years ago
- Sequence to Sequence Learning with Keras ☆3,170 · Updated 2 years ago
- A Keras+TensorFlow implementation of the Transformer: Attention Is All You Need ☆713 · Updated 3 years ago
- Keras layer implementation of attention for sequential models ☆444 · Updated last year
- TensorFlow implementation of an attention mechanism for text classification tasks ☆747 · Updated 5 years ago
- Keras library for building (Universal) Transformers, facilitating BERT and GPT models ☆536 · Updated 4 years ago
- Framework for building complex recurrent neural networks with Keras ☆764 · Updated 2 years ago
- Keras implementation of BERT with pre-trained weights ☆814 · Updated 5 years ago
- Assorted attention implementations ☆1,440 · Updated 5 years ago
- Layer outputs and gradients in Keras, made easy ☆1,055 · Updated 6 months ago
- Transformer implemented in Keras ☆372 · Updated 3 years ago
- Visualization toolbox for Long Short-Term Memory networks (LSTMs) ☆1,227 · Updated 3 years ago
- Implementation of Sequence Generative Adversarial Nets with Policy Gradient ☆2,090 · Updated 5 years ago
- Keras community contributions ☆1,577 · Updated 2 years ago
- Attention-based LSTM/Dense layers implemented in Keras ☆297 · Updated 6 years ago
- LSTM and QRNN Language Model Toolkit for PyTorch ☆1,965 · Updated 3 years ago
- An open-source framework for seq2seq models in PyTorch ☆1,506 · Updated 2 years ago
- A TensorFlow implementation of the Transformer: Attention Is All You Need ☆4,328 · Updated last year
- Go to https://github.com/pytorch/tutorials; this repo is deprecated and no longer maintained ☆4,533 · Updated 3 years ago
- A wrapper layer for stacking layers horizontally ☆228 · Updated 3 years ago
- Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755) ☆2,102 · Updated 3 years ago
- A Keras implementation of CapsNet from the NIPS 2017 paper "Dynamic Routing Between Capsules"; now test error = 0.34% ☆2,464 · Updated 4 years ago
- Text classifier for Hierarchical Attention Networks for Document Classification ☆1,071 · Updated 3 years ago
- Implementations of papers for the text classification task on DBpedia ☆737 · Updated 4 years ago
- Dynamic seq2seq in TensorFlow, step by step ☆996 · Updated 7 years ago
- Sequence modeling benchmarks and temporal convolutional networks ☆4,244 · Updated 2 years ago
- Bi-directional Attention Flow (BiDAF) network: a multi-stage hierarchical process that represents context at different levels of granul… ☆1,533 · Updated last year
- Implementation of BERT that can load official pre-trained models for feature extraction and prediction ☆2,423 · Updated 3 years ago
- Implementations for a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with Ten… ☆353 · Updated last year