kpot / keras-transformer
Keras library for building (Universal) Transformers, facilitating BERT and GPT models
☆536 · Updated 4 years ago
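The common building block behind keras-transformer and most of the attention libraries listed below is scaled dot-product attention. As a rough illustration only (plain NumPy, not keras-transformer's actual API), the mechanism can be sketched as:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Scaled dot-product attention from "Attention Is All You Need".

    q, k, v: arrays of shape (seq_len, d_model).
    Returns (output, attention_weights).
    """
    d_k = q.shape[-1]
    # Pairwise query/key similarities, scaled to keep softmax gradients stable.
    scores = q @ k.T / np.sqrt(d_k)              # (seq_len, seq_len)
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mix of the value vectors.
    return weights @ v, weights

# Self-attention: queries, keys, and values all come from the same sequence.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out, w = scaled_dot_product_attention(x, x, x)
```

Each row of `w` is a probability distribution over the input positions, so the output at each timestep is a context-dependent summary of the whole sequence; the libraries below wrap this idea in Keras/TensorFlow/PyTorch layers.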
Alternatives and similar repositories for keras-transformer:
Users who are interested in keras-transformer are comparing it to the libraries listed below.
- A Keras+TensorFlow implementation of the Transformer: Attention Is All You Need ☆713 · Updated 3 years ago
- Transformer implemented in Keras ☆372 · Updated 3 years ago
- Attention mechanism for processing sequential data that considers the context for each timestamp ☆658 · Updated 3 years ago
- A wrapper layer for stacking layers horizontally ☆228 · Updated 3 years ago
- Keras implementation of BERT with pre-trained weights ☆814 · Updated 5 years ago
- Keras layer implementation of attention for sequential models ☆444 · Updated last year
- A simple technique to integrate BERT from TF Hub into Keras ☆258 · Updated 2 years ago
- Visualizing RNNs using the attention mechanism ☆749 · Updated 5 years ago
- Neural Machine Translation with Keras ☆532 · Updated 3 years ago
- Re-implementation of ELMo on Keras ☆134 · Updated last year
- A Keras TensorFlow 2.0 implementation of BERT, ALBERT and adapter-BERT ☆807 · Updated 2 years ago
- Load GPT-2 checkpoint and generate texts ☆127 · Updated 3 years ago
- How to use ELMo embeddings in Keras with TensorFlow Hub ☆260 · Updated 6 years ago
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,180 · Updated 3 years ago
- TensorFlow implementation of "Attention Is All You Need" (2017. 6) ☆348 · Updated 6 years ago
- Implementation of papers for the text classification task on DBpedia ☆737 · Updated 4 years ago
- TensorFlow implementation of contextualized word representations from bi-directional language models ☆1,619 · Updated last year
- Hierarchical Attention Networks for Document Classification in PyTorch ☆605 · Updated 4 years ago
- TensorFlow implementation of attention mechanism for text classification tasks ☆747 · Updated 5 years ago
- Minimal Seq2Seq model with attention for Neural Machine Translation in PyTorch ☆695 · Updated 4 years ago
- Layer normalization implemented in Keras ☆60 · Updated 3 years ago
- Neat (Neural Attention) Vision is a visualization tool for the attention mechanisms of deep-learning models for Natural Language Process… ☆250 · Updated 6 years ago
- A TensorFlow implementation of QANet for machine reading comprehension ☆981 · Updated 6 years ago
- Deep Reinforcement Learning for Sequence-to-Sequence Models ☆766 · Updated last year
- 🔡 Token-level embeddings from a BERT model on MXNet and GluonNLP ☆452 · Updated 5 years ago
- Framework for building complex recurrent neural networks with Keras ☆764 · Updated 2 years ago
- Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN ☆962 · Updated 6 years ago
- This is where I put all my work in Natural Language Processing ☆96 · Updated 3 years ago
- Attention-based sequence-to-sequence learning ☆390 · Updated 5 years ago
- A repository containing tutorials for practical NLP using PyTorch ☆533 · Updated 5 years ago