lsdefine / attention-is-all-you-need-keras
A Keras+TensorFlow Implementation of the Transformer: Attention Is All You Need
☆712 Updated 3 years ago
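The core building block of the repository above is the scaled dot-product attention from "Attention Is All You Need". A minimal NumPy sketch of that operation (illustrative only; the repository itself wraps this logic in Keras layers, and all names below are my own, not taken from its code):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V (Vaswani et al., 2017)."""
    d_k = q.shape[-1]
    # raw compatibility scores between each query and each key
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)   # (batch, len_q, len_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)          # block masked positions
    # numerically stable row-wise softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

# toy example: 1 batch, 2 query positions, 3 key/value positions, d_k = 4
q = np.ones((1, 2, 4))
k = np.ones((1, 3, 4))
v = np.arange(12, dtype=float).reshape(1, 3, 4)
out, w = scaled_dot_product_attention(q, k, v)
# identical queries/keys give uniform weights, so out is the mean of v's rows
```

Because every query-key dot product is equal here, the softmax yields uniform weights of 1/3 and the output is the average of the value rows, which makes the example easy to verify by hand.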
Alternatives and similar repositories for attention-is-all-you-need-keras:
Users who are interested in attention-is-all-you-need-keras are comparing it to the libraries listed below.
- Keras library for building (Universal) Transformers, facilitating BERT and GPT models ☆537 Updated 4 years ago
- Transformer implemented in Keras ☆371 Updated 3 years ago
- Attention mechanism for processing sequential data that considers the context for each timestamp. ☆655 Updated 3 years ago
- Keras implementation of BERT with pre-trained weights ☆814 Updated 5 years ago
- A wrapper layer for stacking layers horizontally ☆228 Updated 3 years ago
- TensorFlow implementation of an attention mechanism for text classification tasks. ☆748 Updated 5 years ago
- TensorFlow implementation of 'Attention Is All You Need (2017. 6)' ☆348 Updated 6 years ago
- Visualizing RNNs using the attention mechanism ☆749 Updated 5 years ago
- Hierarchical Attention Networks for Document Classification in PyTorch ☆604 Updated 5 years ago
- An example attention network with a simple dataset. ☆230 Updated 6 years ago
- Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is currently unmaintained, issues will proba… ☆465 Updated 11 months ago
- A collection of attention implementations ☆1,446 Updated 5 years ago
- Implementation of papers for the text classification task on DBpedia ☆737 Updated 4 years ago
- Minimal Seq2Seq model with Attention for Neural Machine Translation in PyTorch ☆699 Updated 4 years ago
- Text classifier for Hierarchical Attention Networks for Document Classification ☆1,077 Updated 3 years ago
- Attention-based sequence-to-sequence learning ☆391 Updated 5 years ago
- Keras Attention Layer (Luong and Bahdanau scores). ☆2,810 Updated last year
- A TensorFlow implementation of QANet for machine reading comprehension ☆981 Updated 6 years ago
- Four styles of encoder-decoder models in Python, Theano, Keras, and Seq2Seq ☆277 Updated 7 years ago
- TensorFlow implementation of contextualized word representations from bidirectional language models ☆1,619 Updated 2 years ago
- Empower Sequence Labeling with Task-Aware Language Model ☆847 Updated 2 years ago
- A Structured Self-Attentive Sentence Embedding ☆492 Updated 5 years ago
- Code for the Directional Self-Attention Network (DiSAN) ☆312 Updated 6 years ago
- Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN ☆964 Updated 6 years ago
- Sequence to Sequence Models with PyTorch ☆738 Updated 3 years ago
- Neural Machine Translation with Keras ☆531 Updated 3 years ago
- Sequence-to-sequence learning using TensorFlow. ☆389 Updated 7 years ago
- BiLSTM-CNN-CRF architecture for sequence tagging ☆830 Updated 3 years ago
- An open-source implementation of the paper "A Structured Self-Attentive Sentence Embedding" (Lin et al., ICLR 2017). ☆430 Updated 7 years ago
- Implementation of ABCNN (Attention-Based Convolutional Neural Network) in TensorFlow ☆277 Updated 6 years ago
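Several of the seq2seq repositories listed above implement attention with either Luong (multiplicative) or Bahdanau (additive) scoring, as the "Keras Attention Layer (Luong and Bahdanau scores)" entry notes. A hedged NumPy sketch of the two scoring functions (shapes, weight names, and sizes below are illustrative assumptions, not taken from any of the listed codebases):

```python
import numpy as np

rng = np.random.default_rng(42)
d = 8                                 # hidden size (illustrative)
h_t = rng.standard_normal(d)          # current decoder hidden state
h_s = rng.standard_normal((5, d))     # 5 encoder hidden states

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Luong (multiplicative): score(h_t, h_s) = h_s . h_t
luong_scores = h_s @ h_t                                 # shape (5,)

# Bahdanau (additive): score(h_t, h_s) = v^T tanh(W1 h_t + W2 h_s)
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))
v = rng.standard_normal(d)
bahdanau_scores = np.tanh(h_t @ W1.T + h_s @ W2.T) @ v   # shape (5,)

# either way, a softmax turns scores into attention weights over
# source positions, and the context vector is a weighted sum of states
luong_weights = softmax(luong_scores)
bahdanau_weights = softmax(bahdanau_scores)
context = luong_weights @ h_s                            # shape (d,)
```

The practical difference is that Luong scoring is a single dot product per position, while Bahdanau scoring adds a small learned feed-forward layer, which costs more but lets the model learn a nonlinear compatibility function.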