CyberZHG / keras-self-attention
Attention mechanism for processing sequential data that considers the context for each timestamp.
☆656 · Updated 2 years ago
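The mechanism described above (additive, Bahdanau-style self-attention that scores every timestep against every other) can be sketched in plain NumPy. This is an illustrative sketch only, not the library's API: the function name, the `units` size, and the randomly initialized weights are all placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def additive_self_attention(x, units=16, seed=0):
    """Additive self-attention over a sequence x of shape (T, d).

    Each timestep t is scored against every timestep t' with a small
    feed-forward network, then replaced by the attention-weighted sum
    of all inputs. Weights are random here, purely for illustration.
    """
    T, d = x.shape
    rng = np.random.default_rng(seed)
    Wt = rng.normal(scale=0.1, size=(d, units))  # projects the "query" step
    Wx = rng.normal(scale=0.1, size=(d, units))  # projects the "key" step
    Wa = rng.normal(scale=0.1, size=(units, 1))  # reduces to a scalar score
    # e[t, t'] = Wa . tanh(x_t Wt + x_{t'} Wx), via broadcasting -> (T, T, units)
    h = np.tanh((x @ Wt)[:, None, :] + (x @ Wx)[None, :, :])
    e = (h @ Wa).squeeze(-1)          # (T, T) score matrix
    a = softmax(e, axis=-1)           # each row sums to 1
    return a @ x, a                   # context-weighted outputs, attention map
```

Each output row is a convex combination of all input timesteps, which is what lets the layer "consider the context for each timestamp."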
Related projects
Alternatives and complementary repositories for keras-self-attention
- A wrapper layer for stacking layers horizontally (☆227, updated 2 years ago)
- A Keras+TensorFlow implementation of the Transformer: Attention Is All You Need (☆708, updated 3 years ago)
- Transformer implemented in Keras (☆370, updated 2 years ago)
- Keras library for building (Universal) Transformers, facilitating BERT and GPT models (☆533, updated 4 years ago)
- Keras layer implementation of attention for sequential models (☆444, updated last year)
- Keras attention layer (Luong and Bahdanau scores) (☆2,801, updated last year)
- An example attention network with a simple dataset (☆230, updated 5 years ago)
- Visualizing RNNs using the attention mechanism (☆747, updated 5 years ago)
- Keras implementation of BERT with pre-trained weights (☆813, updated 5 years ago)
- Implementations for a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with Ten… (☆345, updated 9 months ago)
- Attention-based LSTM/Dense implemented in Keras (☆294, updated 6 years ago)
- Self-attention and text classification (☆119, updated 6 years ago)
- A simple technique to integrate BERT from TF Hub into Keras (☆258, updated last year)
- Named-entity recognition with bidirectional LSTM-CNNs (☆359, updated 4 years ago)
- Attention-based bidirectional LSTM for classification tasks (ICASSP) (☆108, updated last year)
- Implementation of papers for the text classification task on DBpedia (☆737, updated 4 years ago)
- An implementation of a sequence-to-sequence neural network using an encoder-decoder (☆208, updated 5 years ago)
- TensorFlow implementation of attention mechanisms for text classification tasks (☆748, updated 4 years ago)
- Collection of custom layers and utility functions for Keras that are missing from the main framework (☆62, updated 4 years ago)
- A collection of attention implementations (☆1,433, updated 5 years ago)
- Re-implementation of ELMo on Keras (☆135, updated last year)
- TensorFlow implementation of "Attention Is All You Need" (2017) (☆349, updated 6 years ago)
- Text classification using deep learning models in PyTorch (☆809, updated 6 years ago)
- Minimal Seq2Seq model with attention for neural machine translation in PyTorch (☆690, updated 3 years ago)
- Layer normalization implemented in Keras (☆60, updated 2 years ago)
- How to use ELMo embeddings in Keras with TensorFlow Hub (☆261, updated 5 years ago)
- A bidirectional LSTM with attention for multiclass/multilabel text classification (☆172, updated 2 months ago)
- Binary and categorical focal loss implementation in Keras (☆278, updated last year)
- Neural machine translation with Keras (☆533, updated 3 years ago)
- A Keras TensorFlow 2.0 implementation of BERT, ALBERT and adapter-BERT (☆802, updated last year)