zimmerrol / attention-is-all-you-need-keras
Implementation of the Transformer architecture described by Vaswani et al. in "Attention Is All You Need"
☆28Updated 6 years ago
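The core operation this repository implements is the paper's scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k))V. A minimal NumPy sketch of that formula (the function name and shapes here are illustrative, not taken from the repository's API):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,
    as defined in Vaswani et al. (2017)."""
    d_k = q.shape[-1]
    # similarity scores, scaled by sqrt(d_k): (batch, len_q, len_k)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)
    # numerically stable row-wise softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # weighted sum of values: (batch, len_q, d_v)
    return weights @ v

# toy self-attention example: batch 1, sequence length 3, model dim 4
rng = np.random.default_rng(0)
q = rng.normal(size=(1, 3, 4))
out = scaled_dot_product_attention(q, q, q)
print(out.shape)  # (1, 3, 4)
```

The Keras implementation additionally splits Q, K, and V into multiple heads and applies learned projections, but the scaled softmax above is the same computation at each head.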
Alternatives and similar repositories for attention-is-all-you-need-keras
Users interested in attention-is-all-you-need-keras are comparing it to the libraries listed below
- An Attention Layer in Keras☆43Updated 6 years ago
- Position embedding layers in Keras☆58Updated 3 years ago
- Tensorflow Implementation of Densely Connected Bidirectional LSTM with Applications to Sentence Classification☆47Updated 7 years ago
- Layer normalization implemented in Keras☆60Updated 3 years ago
- My implementation of "Hierarchical Attention Networks for Document Classification" in Keras☆26Updated 7 years ago
- Collection of custom layers and utility functions for Keras which are missing in the main framework.☆62Updated 5 years ago
- ☆38Updated 8 years ago
- Tensorflow implementation of Semi-supervised Sequence Learning (https://arxiv.org/abs/1511.01432)☆81Updated 2 years ago
- Keras implementation of "Gated Linear Unit"☆23Updated last year
- Seq2seq attention in Keras☆39Updated 6 years ago
- Multilingual hierarchical attention networks toolkit☆77Updated 5 years ago
- Simple Tensorflow Implementation of "A Structured Self-attentive Sentence Embedding" (ICLR 2017)☆91Updated 7 years ago
- Implements an en-fr translation task using a seq2seq encoder-decoder built from RNN layers with an attention mechanism and beam-search inference d…☆21Updated 7 years ago
- Attention block for the Keras functional API, TensorFlow backend only☆26Updated 6 years ago
- Modern LSTM cells implemented in TensorFlow and tested on PTB language modeling: Highway State Gating, Hypernets, Recurrent High…☆30Updated 6 years ago
- Tensorflow implementation of "A Structured Self-Attentive Sentence Embedding"☆193Updated 3 years ago
- Bi-Directional Block Self-Attention☆122Updated 7 years ago
- Transformer-XL with checkpoint loader☆68Updated 3 years ago
- Implementation of Hierarchical Attention Networks as presented in https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf☆57Updated 7 years ago
- Multi-Task Learning in NLP☆94Updated 7 years ago
- For those who want to model inter-sentence relations, please use the more powerful "Attentive Convolution" code in another repository☆90Updated 7 years ago
- (arXiv:1509.06664) Reasoning about Entailment with Neural Attention.☆44Updated 6 years ago
- RAdam optimizer for Keras☆71Updated 5 years ago
- Self-attention for text classification☆119Updated 6 years ago
- An implementation of Hierarchical Attention Networks for Document Classification in Keras☆46Updated 4 years ago
- Implementation of ULMFit algorithm for text classification via transfer learning☆94Updated 6 years ago
- CapsNet for NLP☆67Updated 6 years ago
- Kaggle Competition: Using deep learning to solve Quora's question pairs problem☆54Updated 8 years ago
- QANet in keras (with Cove)☆66Updated 6 years ago
- A human-readable Keras-based sequence-to-sequence framework with attention mechanisms☆18Updated 6 years ago