lsdefine/attention-is-all-you-need-keras
A Keras+TensorFlow implementation of the Transformer: "Attention Is All You Need"
☆709 · Updated 3 years ago
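The repository implements the Transformer from "Attention Is All You Need", whose core building block is scaled dot-product attention: Attention(Q, K, V) = softmax(QKᵀ / √d_k) V. A minimal NumPy sketch of that operation (an illustration only, not the repository's own code; the function name and toy shapes are my own):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Compute softmax(Q K^T / sqrt(d_k)) V over the last two axes."""
    d_k = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)  # (batch, q_len, k_len)
    # numerically stable softmax over the key axis
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v  # (batch, q_len, d_v)

# toy self-attention example: batch of 1, sequence length 3, model dim 4
q = np.random.rand(1, 3, 4)
out = scaled_dot_product_attention(q, q, q)
print(out.shape)  # (1, 3, 4)
```

The Keras repositories listed below wrap this same computation in trainable layers, typically adding multi-head projections and masking on top.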
Related projects
Alternatives and complementary repositories for attention-is-all-you-need-keras
- Transformer implemented in Keras ☆370 · Updated 2 years ago
- Keras library for building (Universal) Transformers, facilitating BERT and GPT models ☆534 · Updated 4 years ago
- Attention mechanism for processing sequential data that considers the context for each timestamp ☆656 · Updated 2 years ago
- A wrapper layer for stacking layers horizontally ☆227 · Updated 2 years ago
- Keras implementation of BERT with pre-trained weights ☆813 · Updated 5 years ago
- Visualizing RNNs using the attention mechanism ☆747 · Updated 5 years ago
- TensorFlow implementation of "Attention Is All You Need (2017. 6)" ☆349 · Updated 6 years ago
- TensorFlow implementation of attention mechanism for text classification tasks ☆748 · Updated 4 years ago
- Keras layer implementation of attention for sequential models ☆444 · Updated last year
- An example attention network with a simple dataset ☆230 · Updated 5 years ago
- Some attention implementations ☆1,433 · Updated 4 years ago
- Hierarchical Attention Networks for Document Classification in PyTorch ☆606 · Updated 4 years ago
- Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is currently unmaintained, issues will proba… ☆463 · Updated 6 months ago
- Implementation of papers for the text classification task on DBpedia ☆737 · Updated 4 years ago
- Text classifier for Hierarchical Attention Networks for Document Classification ☆1,069 · Updated 3 years ago
- Keras attention layer (Luong and Bahdanau scores) ☆2,801 · Updated 11 months ago
- Attention-based sequence-to-sequence learning ☆388 · Updated 5 years ago
- Implementations for a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with Ten… ☆344 · Updated 9 months ago
- Sequence-to-sequence models with PyTorch ☆736 · Updated 2 years ago
- A Structured Self-Attentive Sentence Embedding ☆494 · Updated 5 years ago
- Neural machine translation with Keras ☆533 · Updated 3 years ago
- Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN ☆958 · Updated 5 years ago
- Implementation of the Transformer model in TensorFlow ☆452 · Updated last year
- A capsule implementation in pure Keras ☆350 · Updated 4 years ago
- Code for the Directional Self-Attention Network (DiSAN) ☆312 · Updated 6 years ago
- Minimal seq2seq model with attention for neural machine translation in PyTorch ☆690 · Updated 3 years ago
- Deep reinforcement learning for sequence-to-sequence models ☆765 · Updated last year
- Four styles of encoder-decoder models in Python, Theano, Keras, and Seq2Seq ☆276 · Updated 7 years ago
- PyTorch implementation of a batched bi-RNN encoder and attention decoder ☆279 · Updated 5 years ago
- Multi-class metrics for TensorFlow ☆225 · Updated 2 years ago