zimmerrol / attention-is-all-you-need-keras
Implementation of the Transformer architecture described by Vaswani et al. in "Attention Is All You Need"
☆28Updated 6 years ago
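As context for the repositories listed below, the core operation they all build on — scaled dot-product attention, softmax(QKᵀ/√d_k)·V from the Vaswani et al. paper — can be sketched in plain NumPy. This is a minimal illustration of the mechanism, not code from any of the listed repos:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    q: (batch, len_q, d_k), k: (batch, len_k, d_k), v: (batch, len_k, d_v)."""
    d_k = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)   # (batch, len_q, len_k)
    # Numerically stable row-wise softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                  # (batch, len_q, d_v)

# Toy self-attention example: batch of 1, sequence length 3, model dim 4
rng = np.random.default_rng(0)
q = rng.standard_normal((1, 3, 4))
out = scaled_dot_product_attention(q, q, q)
print(out.shape)  # (1, 3, 4)
```

The Keras repos above typically wrap this in a custom `Layer`, add multi-head projections, and mask padded positions before the softmax.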
Alternatives and similar repositories for attention-is-all-you-need-keras
Users interested in attention-is-all-you-need-keras are comparing it to the repositories listed below
- An Attention Layer in Keras☆43Updated 6 years ago
- Collection of custom layers and utility functions for Keras that are missing from the main framework.☆62Updated 5 years ago
- Layer normalization implemented in Keras☆60Updated 3 years ago
- Tensorflow implementation of "A Structured Self-Attentive Sentence Embedding"☆194Updated 4 years ago
- Position embedding layers in Keras☆58Updated 3 years ago
- Tensorflow implementation of Semi-supervised Sequence Learning (https://arxiv.org/abs/1511.01432)☆82Updated 2 years ago
- Simple Tensorflow Implementation of "A Structured Self-attentive Sentence Embedding" (ICLR 2017)☆91Updated 7 years ago
- Sequence-to-sequence with attention from scratch in TensorFlow☆29Updated 8 years ago
- Tensorflow Implementation of Densely Connected Bidirectional LSTM with Applications to Sentence Classification☆47Updated 7 years ago
- Tensorflow Implementation of Recurrent Neural Network (Vanilla, LSTM, GRU) for Text Classification☆118Updated 7 years ago
- My implementation of "Hierarchical Attention Networks for Document Classification" in Keras☆26Updated 7 years ago
- ☆38Updated 8 years ago
- This repository contains various attention mechanisms such as Bahdanau, soft, additive, and hierarchical attention…☆126Updated 4 years ago
- Implementation of Hierarchical Attention Networks as presented in https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf☆57Updated 7 years ago
- Re-implementation of ELMo on Keras☆134Updated 2 years ago
- A bidirectional LSTM with attention for multiclass/multilabel text classification.☆173Updated last year
- Keras implementation of "Gated Linear Unit"☆23Updated last year
- Code of Directional Self-Attention Network (DiSAN)☆311Updated 7 years ago
- A wrapper layer for stacking layers horizontally☆228Updated 3 years ago
- Self-attention for text classification☆119Updated 6 years ago
- QANet in keras (with Cove)☆66Updated 6 years ago
- TensorFlow implementation of 'Attention Is All You Need (2017. 6)'☆349Updated 7 years ago
- Multi-label text classification using ConvNet and graph embedding (TensorFlow implementation)☆44Updated 2 years ago
- Keras attentional bi-LSTM-CRF for joint NLU (slot filling and intent detection) on ATIS☆124Updated 7 years ago
- PyTorch implementation of LSTM-based text classification on the R8 dataset☆141Updated 8 years ago
- Transformer-XL with checkpoint loader☆68Updated 3 years ago
- Text Classification by Convolutional Neural Network in Keras☆219Updated 7 years ago
- Implementation of Simple Recurrent Unit in Keras☆90Updated 7 years ago
- A Keras implementation of an attention model for sequence-to-sequence learning.☆20Updated 7 years ago
- Implementation of Very Deep Convolutional Neural Network for Text Classification☆173Updated 3 years ago