kpot / keras-transformer
Keras library for building (Universal) Transformers, facilitating BERT and GPT models
☆537 · Updated 5 years ago
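For orientation, below is a minimal sketch of the kind of building block such a library wraps up: a Transformer encoder block written against the plain tf.keras API (tf.keras.layers.MultiHeadAttention, available since TF 2.4). This is an illustrative assumption, not keras-transformer's own class names or API; Universal Transformer features such as weight sharing across depth are not shown.

```python
# Sketch of a post-norm Transformer encoder block using plain tf.keras layers.
# NOT the keras-transformer package's API, just what such libraries provide.
import tensorflow as tf
from tensorflow.keras import layers

def transformer_block(x, num_heads=8, key_dim=64, ff_dim=2048, dropout=0.1):
    """Self-attention + position-wise feed-forward, each with a residual connection."""
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=key_dim)(x, x)
    attn = layers.Dropout(dropout)(attn)
    x = layers.LayerNormalization(epsilon=1e-6)(x + attn)

    ff = layers.Dense(ff_dim, activation="relu")(x)
    ff = layers.Dense(x.shape[-1])(ff)
    ff = layers.Dropout(dropout)(ff)
    return layers.LayerNormalization(epsilon=1e-6)(x + ff)

# Usage: stack a couple of blocks over token embeddings.
inputs = layers.Input(shape=(None,), dtype="int32")           # token ids
h = layers.Embedding(input_dim=32000, output_dim=512)(inputs)
for _ in range(2):
    h = transformer_block(h)
model = tf.keras.Model(inputs, h)
```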
Alternatives and similar repositories for keras-transformer
Users interested in keras-transformer are comparing it to the libraries listed below.
- A Keras+TensorFlow Implementation of the Transformer: Attention Is All You Need ☆712 · Updated 3 years ago
- Transformer implemented in Keras ☆371 · Updated 3 years ago
- Attention mechanism for processing sequential data that considers the context for each timestamp. ☆655 · Updated 3 years ago
- A wrapper layer for stacking layers horizontally ☆228 · Updated 3 years ago
- Keras implementation of BERT with pre-trained weights ☆814 · Updated 5 years ago
- Keras Layer implementation of Attention for Sequential models (a generic attention-layer sketch follows this list) ☆441 · Updated 2 years ago
- Visualizing RNNs using the attention mechanism ☆750 · Updated 6 years ago
- A simple technique to integrate BERT from tf hub to keras ☆258 · Updated 2 years ago
- Neural Machine Translation with Keras ☆530 · Updated 3 years ago
- How to use ELMo embeddings in Keras with Tensorflow Hub ☆259 · Updated 6 years ago
- A Keras TensorFlow 2.0 implementation of BERT, ALBERT and adapter-BERT. ☆807 · Updated 2 years ago
- A repository containing tutorials for practical NLP using PyTorch ☆537 · Updated 5 years ago
- Multilabel classification for Toxic comments challenge using Bert ☆311 · Updated 5 years ago
- A Tensorflow implementation of QANet for machine reading comprehension ☆981 · Updated 7 years ago
- 🔡 Token level embeddings from BERT model on mxnet and gluonnlp ☆452 · Updated 5 years ago
- Hierarchical Attention Networks for Document Classification in PyTorch ☆605 · Updated 5 years ago
- Tensorflow implementation of contextualized word representations from bi-directional language models ☆1,614 · Updated 2 years ago
- Re-implementation of ELMo on Keras ☆133 · Updated 2 years ago
- Implementation of papers for text classification task on DBpedia ☆737 · Updated 4 years ago
- BiLSTM-CNN-CRF architecture for sequence tagging using ELMo representations. ☆387 · Updated 2 years ago
- Load GPT-2 checkpoint and generate texts ☆127 · Updated 3 years ago
- Pervasive Attention: 2D Convolutional Networks for Sequence-to-Sequence Prediction ☆502 · Updated 4 years ago
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,181 · Updated 3 years ago
- Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN ☆966 · Updated 6 years ago
- TensorFlow implementation of 'Attention Is All You Need (2017. 6)' ☆348 · Updated 7 years ago
- RAdam implemented in Keras & TensorFlow ☆325 · Updated 3 years ago
- MASS: Masked Sequence to Sequence Pre-training for Language Generation ☆1,116 · Updated 2 years ago
- An open source framework for seq2seq models in PyTorch. ☆1,509 · Updated last month
- Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CAS… ☆745 · Updated 3 years ago
- Empower Sequence Labeling with Task-Aware Language Model ☆847 · Updated 3 years ago
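Several entries above are standalone Keras attention layers for sequential models. As a rough, hedged illustration of what such a layer computes (not the API of any specific repository listed), here is a generic scaled dot-product self-attention layer written as a custom tf.keras layer:

```python
# Generic sketch of self-attention over a sequence as a custom tf.keras layer.
# Not taken from any of the repositories above; names here are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

class ScaledDotProductSelfAttention(layers.Layer):
    """Computes softmax(Q K^T / sqrt(d)) V across the time axis of a sequence."""
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.wq = layers.Dense(units)
        self.wk = layers.Dense(units)
        self.wv = layers.Dense(units)

    def call(self, inputs):
        q, k, v = self.wq(inputs), self.wk(inputs), self.wv(inputs)
        scores = tf.matmul(q, k, transpose_b=True) / tf.math.sqrt(
            tf.cast(self.units, tf.float32))
        weights = tf.nn.softmax(scores, axis=-1)   # (batch, time, time)
        return tf.matmul(weights, v)               # (batch, time, units)

# Usage on top of a recurrent encoder, e.g. for sequence classification.
inputs = layers.Input(shape=(None, 128))
h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inputs)
h = ScaledDotProductSelfAttention(64)(h)
h = layers.GlobalAveragePooling1D()(h)
outputs = layers.Dense(1, activation="sigmoid")(h)
model = tf.keras.Model(inputs, outputs)
```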