kpot / keras-transformer
Keras library for building (Universal) Transformers, facilitating BERT and GPT models
☆533 · Updated 4 years ago
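The core mechanism all of these Transformer libraries build on is scaled dot-product attention. As a rough, self-contained sketch of that computation in plain NumPy (this illustrates the concept only and is not the API of keras-transformer or any repository listed below):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d_k)  # (batch, seq_q, seq_k)
    weights = softmax(scores, axis=-1)              # each row sums to 1
    return weights @ v, weights

# Illustrative shapes: batch of 2, query length 4, key/value length 6, dim 8.
rng = np.random.default_rng(0)
q = rng.standard_normal((2, 4, 8))
k = rng.standard_normal((2, 6, 8))
v = rng.standard_normal((2, 6, 8))
out, w = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (2, 4, 8)
```

A Transformer layer wraps this in multiple parallel "heads" plus learned projections; in modern Keras the equivalent built-in is `tf.keras.layers.MultiHeadAttention`.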
Related projects
Alternatives and complementary repositories for keras-transformer
- A Keras+TensorFlow Implementation of the Transformer: Attention Is All You Need ☆709 · Updated 3 years ago
- Transformer implemented in Keras ☆371 · Updated 2 years ago
- Attention mechanism for processing sequential data that considers the context for each timestamp. ☆657 · Updated 2 years ago
- Keras implementation of BERT with pre-trained weights ☆813 · Updated 5 years ago
- A wrapper layer for stacking layers horizontally ☆227 · Updated 2 years ago
- Keras Layer implementation of Attention for Sequential models ☆445 · Updated last year
- Visualizing RNNs using the attention mechanism ☆748 · Updated 5 years ago
- A simple technique to integrate BERT from TF Hub into Keras ☆258 · Updated last year
- Re-implementation of ELMo on Keras ☆135 · Updated last year
- Neural Machine Translation with Keras ☆533 · Updated 3 years ago
- A Keras TensorFlow 2.0 implementation of BERT, ALBERT and adapter-BERT. ☆802 · Updated last year
- TensorFlow implementation of 'Attention Is All You Need (2017. 6)' ☆350 · Updated 6 years ago
- How to use ELMo embeddings in Keras with TensorFlow Hub ☆261 · Updated 5 years ago
- Load GPT-2 checkpoint and generate text ☆128 · Updated 2 years ago
- Implementation of papers for the text classification task on DBpedia ☆738 · Updated 4 years ago
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,179 · Updated 2 years ago
- PyTorch implementation of R-Transformer. Some parts of the code are adapted from the implementations of TCN and Transformer. ☆224 · Updated 5 years ago
- Hierarchical Attention Networks for Document Classification in PyTorch ☆606 · Updated 4 years ago
- Deep Reinforcement Learning for Sequence-to-Sequence Models ☆765 · Updated last year
- Keras Attention Layer (Luong and Bahdanau scores). ☆2,802 · Updated last year
- Text classifier for Hierarchical Attention Networks for Document Classification ☆1,070 · Updated 3 years ago
- Implementations of a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with Ten… ☆347 · Updated 9 months ago
- Pervasive Attention: 2D Convolutional Networks for Sequence-to-Sequence Prediction ☆502 · Updated 3 years ago
- Implementation of XLNet that can load pretrained checkpoints ☆172 · Updated 2 years ago
- Framework for building complex recurrent neural networks with Keras ☆765 · Updated 2 years ago
- An example attention network with a simple dataset ☆230 · Updated 5 years ago
- Multilabel classification for the Toxic Comments challenge using BERT ☆311 · Updated 5 years ago
- Collection of custom layers and utility functions for Keras that are missing from the main framework ☆62 · Updated 4 years ago
- Attention-based sequence-to-sequence learning ☆389 · Updated 5 years ago
- Layer normalization implemented in Keras ☆60 · Updated 2 years ago
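Several of the repositories above implement Luong (multiplicative) and Bahdanau (additive) attention scoring. As a minimal NumPy sketch of the difference between the two score functions (all names, shapes, and weights here are illustrative, not any listed repo's API):

```python
import numpy as np

def luong_score(query, keys):
    """Luong (dot-product) score: s_i = query . key_i."""
    return keys @ query  # (n,)

def bahdanau_score(query, keys, W_q, W_k, v):
    """Bahdanau (additive) score: s_i = v . tanh(W_q q + W_k k_i)."""
    hidden = np.tanh(query @ W_q.T + keys @ W_k.T)  # (n, h)
    return hidden @ v                               # (n,)

# Illustrative dimensions: key dim 8, 5 timesteps, hidden dim 16.
rng = np.random.default_rng(1)
d, n, h = 8, 5, 16
query = rng.standard_normal(d)
keys = rng.standard_normal((n, d))
W_q = rng.standard_normal((h, d))
W_k = rng.standard_normal((h, d))
v = rng.standard_normal(h)

print(luong_score(query, keys).shape)                   # (5,)
print(bahdanau_score(query, keys, W_q, W_k, v).shape)   # (5,)
```

Either score vector is then normalized with a softmax to weight the values; the Luong form requires matching query/key dimensions, while the Bahdanau form trades extra parameters for that flexibility. In modern Keras the built-in equivalents are `tf.keras.layers.Attention` and `tf.keras.layers.AdditiveAttention`.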