monk1337 / Various-Attention-mechanisms
This repository contains implementations of various attention mechanisms, such as Bahdanau attention, soft attention, additive attention, and hierarchical attention, in PyTorch, TensorFlow, and Keras.
☆125 · Updated 3 years ago
Alternatives and similar repositories for Various-Attention-mechanisms:
Users interested in Various-Attention-mechanisms are comparing it to the libraries listed below
- Document classification using LSTM + self-attention☆113 · Updated 5 years ago
- A PyTorch implementation of Fairseq Convolutional Sequence to Sequence Learning (Gehring et al., 2017)☆44 · Updated 6 years ago
- Implementation of the Transformer architecture described by Vaswani et al. in "Attention Is All You Need"☆28 · Updated 5 years ago
- TensorFlow implementation of Densely Connected Bidirectional LSTM with applications to sentence classification☆47 · Updated 6 years ago
- PyTorch implementation of a batched bi-RNN encoder and attention decoder☆279 · Updated 6 years ago
- Minimal RNN classifier with self-attention in PyTorch☆150 · Updated 3 years ago
- Hierarchical Attention Networks for Document Classification in PyTorch☆36 · Updated 6 years ago
- Implementations of a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with Ten…☆354 · Updated last year
- Multi-head attention for image classification☆81 · Updated 6 years ago
- PyTorch neural network attention mechanisms☆147 · Updated 6 years ago
- Keras implementation of ON-LSTM (Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks)☆155 · Updated 5 years ago
- A wrapper layer for stacking layers horizontally☆228 · Updated 3 years ago
- A quick walk-through of the innards of LSTMs and a naive implementation of the Mogrifier LSTM paper in PyTorch☆76 · Updated 4 years ago
- Collection of custom layers and utility functions for Keras that are missing from the main framework☆62 · Updated 4 years ago
- ☆16 · Updated 6 years ago
- NLSTM: Nested LSTM in PyTorch☆18 · Updated 6 years ago
- Beam search for neural network sequence-to-sequence (encoder-decoder) models☆34 · Updated 5 years ago
- SRU implemented in PyTorch (Training RNNs as Fast as CNNs)☆46 · Updated 2 years ago
- ☆38 · Updated 7 years ago
- Capsule network text-classification models built with Keras, including RNN, CNN, HAN, and others; keras_utils contains Keras implementations of the capsule and attention layers☆77 · Updated 6 years ago
- Conditional sequence generative adversarial network trained with policy gradient, implemented in TensorFlow☆49 · Updated 6 years ago
- Word embedding + LSTM + FC☆161 · Updated 8 months ago
- Text classification with an LSTM on the R8 dataset, PyTorch implementation☆141 · Updated 7 years ago
- An example attention network with a simple dataset☆230 · Updated 6 years ago
- An implementation of an encoder-decoder model with a global attention mechanism☆32 · Updated 5 years ago
- TensorFlow implementation of the paper "Adversarial Multi-task Learning for Text Classification"☆177 · Updated 6 years ago
- TensorFlow implementation of Semi-supervised Sequence Learning (https://arxiv.org/abs/1511.01432)☆81 · Updated 2 years ago
- An attention layer in Keras☆43 · Updated 5 years ago
- Bi-Directional Block Self-Attention☆123 · Updated 6 years ago
- Curated notes on attention mechanisms in natural-language-processing papers☆170 · Updated 6 years ago
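For orientation, the mechanism at the heart of many of the repositories above is additive (Bahdanau-style) attention: a score `v · tanh(W_q q + W_k k_t)` per time step, softmaxed into weights that form a context vector. Below is a minimal framework-free NumPy sketch; all function and parameter names are illustrative and not taken from any repository listed here.

```python
import numpy as np

def additive_attention(query, keys, W_q, W_k, v):
    """Bahdanau-style additive attention (illustrative sketch).

    query: (d_q,) decoder state; keys: (T, d_k) encoder states.
    W_q: (h, d_q), W_k: (h, d_k), v: (h,) are learned parameters.
    Returns the context vector (d_k,) and attention weights (T,).
    """
    # Score each time step: v . tanh(W_q q + W_k k_t), shape (T,)
    scores = np.tanh(W_q @ query + keys @ W_k.T) @ v
    # Numerically stable softmax over the T time steps
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context is the attention-weighted sum of the encoder states
    context = weights @ keys
    return context, weights
```

Usage: with a (T, d_k) matrix of encoder states and a single decoder query, the returned weights are non-negative and sum to 1, and the context vector has the same dimensionality as one encoder state. Batched variants in the listed repositories follow the same computation with an extra batch axis.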