lsdefine / attention-is-all-you-need-keras
A Keras+TensorFlow Implementation of the Transformer: Attention Is All You Need
☆712 · Updated 4 years ago
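The repository implements the Transformer from "Attention Is All You Need". Its core operation, scaled dot-product attention, can be sketched in a few lines. This is a minimal NumPy illustration of the formula softmax(QKᵀ/√d_k)V, not code from the repository itself:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V                                   # weighted sum of values

# Toy shapes: 2 queries, 3 key/value pairs, model dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)              # shape (2, 4)
```

In the full Transformer this operation is applied per head with learned projections of Q, K, and V; the sketch above shows only the attention kernel.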
Alternatives and similar repositories for attention-is-all-you-need-keras
Users interested in attention-is-all-you-need-keras are comparing it to the libraries listed below.
- Keras library for building (Universal) Transformers, facilitating BERT and GPT models ☆541 · Updated 5 years ago
- Transformer implemented in Keras ☆369 · Updated 3 years ago
- Attention mechanism for processing sequential data that considers the context for each timestamp. ☆658 · Updated 3 years ago
- Keras implementation of BERT with pre-trained weights ☆816 · Updated 6 years ago
- Visualizing RNNs using the attention mechanism ☆751 · Updated 6 years ago
- Keras Layer implementation of Attention for Sequential models ☆442 · Updated 2 years ago
- A wrapper layer for stacking layers horizontally ☆229 · Updated 3 years ago
- Hierarchical Attention Networks for Document Classification in PyTorch ☆607 · Updated 5 years ago
- Tensorflow implementation of attention mechanism for text classification tasks. ☆747 · Updated 5 years ago
- TensorFlow implementation of 'Attention Is All You Need (2017. 6)' ☆349 · Updated 7 years ago
- Text classifier for Hierarchical Attention Networks for Document Classification ☆1,081 · Updated 4 years ago
- An example attention network with a simple dataset. ☆228 · Updated 6 years ago
- Implementation of papers for the text classification task on DBpedia ☆737 · Updated 5 years ago
- Minimal Seq2Seq model with Attention for Neural Machine Translation in PyTorch ☆702 · Updated 5 years ago
- Neural Machine Translation with Keras ☆530 · Updated 4 years ago
- Attention-based sequence to sequence learning ☆388 · Updated 6 years ago
- Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is currently unmaintained, issues will proba… ☆467 · Updated last year
- Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN ☆969 · Updated 6 years ago
- A Capsule implementation in pure Keras ☆351 · Updated 5 years ago
- Some attention implementations ☆1,450 · Updated 6 years ago
- Text autoencoder with LSTMs ☆262 · Updated 6 years ago
- Sequence to Sequence Models with PyTorch ☆741 · Updated 3 years ago
- Keras Attention Layer (Luong and Bahdanau scores). ☆2,814 · Updated 2 years ago
- Four styles of encoder-decoder models in Python, Theano, Keras, and Seq2Seq ☆279 · Updated 8 years ago
- A Tensorflow implementation of QANet for machine reading comprehension ☆983 · Updated 7 years ago
- Self-attention for text classification ☆119 · Updated 7 years ago
- Tensorflow implementation of contextualized word representations from bi-directional language models ☆1,613 · Updated 2 years ago
- Implementations for a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with Ten… ☆362 · Updated last year
- PyTorch implementation of batched bi-RNN encoder and attention-decoder. ☆281 · Updated 6 years ago
- A Keras TensorFlow 2.0 implementation of BERT, ALBERT and adapter-BERT. ☆810 · Updated 2 years ago
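Several entries above implement Luong (multiplicative) or Bahdanau (additive) attention scores for seq2seq models. The difference between the two scoring functions can be sketched as follows; this is an illustrative NumPy sketch with hypothetical weight names, not code from any listed repository:

```python
import numpy as np

def luong_score(h_t, h_s, W):
    """Luong 'general' score: h_t^T W h_s (multiplicative)."""
    return h_t @ W @ h_s

def bahdanau_score(h_t, h_s, W1, W2, v):
    """Bahdanau score: v^T tanh(W1 h_t + W2 h_s) (additive)."""
    return v @ np.tanh(W1 @ h_t + W2 @ h_s)

# Toy decoder state h_t and encoder state h_s of dimension 4
rng = np.random.default_rng(0)
d = 4
h_t = rng.normal(size=d)
h_s = rng.normal(size=d)
W = rng.normal(size=(d, d))
W1 = rng.normal(size=(d, d))
W2 = rng.normal(size=(d, d))
v = rng.normal(size=d)

s_luong = luong_score(h_t, h_s, W)                 # scalar alignment score
s_bahdanau = bahdanau_score(h_t, h_s, W1, W2, v)   # scalar alignment score
```

In either case the scalar scores over all encoder states are passed through a softmax to produce attention weights; the libraries differ mainly in which scoring function they expose.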