uzaymacar / attention-mechanisms
Implementations of a family of attention mechanisms, suitable for a wide range of natural language processing tasks and compatible with TensorFlow 2.0 and Keras.
☆354 · Updated last year
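Since the repository targets TensorFlow 2.0 / Keras, here is a minimal sketch of one of the mechanisms it covers, additive (Bahdanau-style) attention, written as a custom Keras layer. The class name, argument names, and return values are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch of additive (Bahdanau-style) attention as a custom Keras
# layer, assuming TensorFlow 2.x. All names are illustrative, not the repo's API.
import tensorflow as tf

class BahdanauAttention(tf.keras.layers.Layer):
    """Scores encoder timesteps against a query and returns a context vector."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.W_query = tf.keras.layers.Dense(units)   # projects the decoder state
        self.W_values = tf.keras.layers.Dense(units)  # projects encoder outputs
        self.v = tf.keras.layers.Dense(1)             # collapses each score to a scalar

    def call(self, query, values):
        # query: (batch, hidden); values: (batch, timesteps, hidden)
        query = tf.expand_dims(query, 1)              # (batch, 1, hidden) for broadcasting
        score = self.v(tf.nn.tanh(self.W_query(query) + self.W_values(values)))
        weights = tf.nn.softmax(score, axis=1)        # attention over timesteps
        context = tf.reduce_sum(weights * values, axis=1)  # weighted sum of encoder outputs
        return context, weights
```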
Alternatives and similar repositories for attention-mechanisms:
Users interested in attention-mechanisms are comparing it to the repositories listed below
- A wrapper layer for stacking layers horizontally ☆228 · Updated 3 years ago
- Attention mechanism for processing sequential data that considers the context for each timestep. ☆658 · Updated 3 years ago
- Keras layer implementation of attention for sequential models ☆443 · Updated 2 years ago
- Transformer implemented in Keras ☆372 · Updated 3 years ago
- A Keras+TensorFlow implementation of the Transformer from "Attention Is All You Need" (the scaled dot-product core shared by these Transformer ports is sketched after this list) ☆712 · Updated 3 years ago
- Keras library for building (Universal) Transformers, facilitating BERT and GPT models ☆537 · Updated 4 years ago
- This repository contains various types of attention mechanisms, such as Bahdanau, soft, additive, and hierarchical attention… ☆125 · Updated 3 years ago
- All about attention in neural networks: soft attention, attention maps, local and global attention, and multi-head attention. ☆231 · Updated 5 years ago
- Attention-based LSTM/Dense layers implemented in Keras ☆298 · Updated 6 years ago
- An example attention network with a simple dataset. ☆230 · Updated 6 years ago
- PyTorch implementation of R-Transformer. Some parts of the code are adapted from the implementations of TCN and Transformer. ☆228 · Updated 5 years ago
- Minimal RNN classifier with self-attention in PyTorch ☆150 · Updated 3 years ago
- Attention-based bidirectional LSTM for classification tasks (ICASSP) ☆114 · Updated 2 years ago
- Implementation of the Transformer model in TensorFlow ☆467 · Updated 2 years ago
- Self-attention and text classification ☆119 · Updated 6 years ago
- A PyTorch implementation of the TCAN model in "Temporal Convolutional Attention-based Network For Sequence Modeling". ☆140 · Updated 2 years ago
- Multi-head attention for image classification ☆81 · Updated 6 years ago
- Implementation of the Transformer architecture described by Vaswani et al. in "Attention Is All You Need" ☆28 · Updated 5 years ago
- An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built in. Fully compatible with PyTorch's LSTM. ☆132 · Updated 5 years ago
- Layer normalization implemented in Keras ☆60 · Updated 3 years ago
- Implementation of state-of-the-art text classification models in PyTorch ☆489 · Updated 6 years ago
- Keras attention layer (Luong and Bahdanau scores) ☆2,808 · Updated last year
- This repo aims to be a useful collection of notebooks/code for understanding and implementing seq2seq neural networks for time series for… ☆607 · Updated 5 years ago
- A PyTorch implementation of Fairseq Convolutional Sequence to Sequence Learning (Gehring et al., 2017) ☆44 · Updated 6 years ago
- Visualizing RNNs using the attention mechanism ☆749 · Updated 5 years ago
- [ICLR'19] Trellis Networks for Sequence Modeling ☆472 · Updated 5 years ago
- TensorFlow Temporal Convolutional Network ☆82 · Updated last year
- PyTorch implementation of a batched bi-RNN encoder and attention decoder. ☆279 · Updated 6 years ago
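Several entries above are ports of the Transformer from "Attention Is All You Need" (Vaswani et al., 2017); the operation they all share is scaled dot-product attention. Below is a minimal TensorFlow 2.x sketch of that operation; the function name and signature are assumptions for illustration, not any listed repository's actual code.

```python
# Minimal sketch of scaled dot-product attention from "Attention Is All You
# Need" (Vaswani et al., 2017), assuming TensorFlow 2.x. Illustrative only.
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    # q: (..., len_q, d_k), k: (..., len_k, d_k), v: (..., len_k, d_v)
    scores = tf.matmul(q, k, transpose_b=True)        # (..., len_q, len_k)
    d_k = tf.cast(tf.shape(k)[-1], tf.float32)
    scores = scores / tf.math.sqrt(d_k)               # scale to stabilize the softmax
    if mask is not None:
        scores += mask * -1e9                         # masked positions get ~0 weight
    weights = tf.nn.softmax(scores, axis=-1)          # attention distribution over keys
    return tf.matmul(weights, v), weights             # context (..., len_q, d_v), weights
```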