yangperasd / gated_cnn
Keras implementation of “Gated Linear Unit”
☆23 · Updated 9 months ago
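For orientation, here is a minimal sketch of the gated linear unit (GLU) idea behind this repository, written against tf.keras rather than the repo's own code; the layer name `GatedLinearUnit` and its `filters`/`kernel_size` arguments are illustrative assumptions, not the repository's API.

```python
import tensorflow as tf
from tensorflow.keras import layers


class GatedLinearUnit(layers.Layer):
    """Gated convolution over a sequence: output = conv_a(x) * sigmoid(conv_b(x))."""

    def __init__(self, filters, kernel_size, **kwargs):
        super().__init__(**kwargs)
        # Linear path and gating path share the input but have separate weights.
        self.linear = layers.Conv1D(filters, kernel_size, padding="same")
        self.gate = layers.Conv1D(filters, kernel_size, padding="same",
                                  activation="sigmoid")

    def call(self, inputs):
        # Element-wise product of the linear response and the sigmoid gate.
        return self.linear(inputs) * self.gate(inputs)


# Usage: gate a batch of embedded token sequences of shape (batch, timesteps, channels).
x = tf.random.normal((2, 50, 128))
y = GatedLinearUnit(filters=128, kernel_size=3)(x)
print(y.shape)  # (2, 50, 128)
```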
Alternatives and similar repositories for gated_cnn:
Users interested in gated_cnn are comparing it to the libraries listed below.
- Adaptive embedding and softmax ☆17 · Updated 3 years ago
- Official code of our work, Robust, Transferable Sentence Representations for Text Classification [arXiv 2018]. ☆21 · Updated 6 years ago
- Leveraging Local and Global Patterns for Self-Attention Networks ☆12 · Updated 5 years ago
- Collection of custom layers and utility functions for Keras which are missing in the main framework. ☆62 · Updated 4 years ago
- TensorFlow implementation of Densely Connected Bidirectional LSTM with Applications to Sentence Classification ☆47 · Updated 6 years ago
- Position embedding layers in Keras ☆58 · Updated 3 years ago
- Implement en-fr translation task by implementing seq2seq, encoder-decoder in RNN layers with Attention mechanism and Beamsearch inference d… ☆21 · Updated 7 years ago
- Implementation of the Transformer architecture described by Vaswani et al. in "Attention Is All You Need" ☆28 · Updated 5 years ago
- Bi-Directional Block Self-Attention ☆123 · Updated 6 years ago
- TensorFlow implementation of "Improved Variational Autoencoders for Text Modeling using Dilated Convolutions" ☆54 · Updated 5 years ago
- The implementation of Meta-LSTM in "Meta Multi-Task Learning for Sequence Modeling," AAAI-18 ☆33 · Updated 6 years ago
- Ordered Neurons LSTM ☆30 · Updated 3 years ago
- Source code of Knowledge Enhanced Hybrid Neural Network for Text Matching ☆17 · Updated 6 years ago
- Attention block for the Keras Functional Model with only the TensorFlow backend ☆26 · Updated 5 years ago
- Reference implementation for the WSDM 2018 paper "Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering" ☆67 · Updated 6 years ago
- Transformer-XL with checkpoint loader ☆68 · Updated 3 years ago
- CapsNet for NLP ☆67 · Updated 6 years ago
- Released code for the TACL paper "Attentive Convolution"; attentive convolution aims to generate a vector for two sentences. ☆104 · Updated 7 years ago
- Code for "Strong Baselines for Neural Semi-supervised Learning under Domain Shift" (Ruder & Plank, ACL 2018) ☆61 · Updated 2 years ago
- Simple TensorFlow implementation of "A Structured Self-attentive Sentence Embedding" (ICLR 2017) ☆91 · Updated 6 years ago
- Layer normalization implemented in Keras ☆60 · Updated 3 years ago
- Multiple Different Natural Language Processing Tasks in a Single Deep Model ☆48 · Updated 6 years ago
- Quasi-RNN for language modeling ☆57 · Updated 8 years ago
- Deep-learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison". ☆21 · Updated 7 years ago
- Experiments using feedforward networks with attention ☆47 · Updated 8 years ago
- ☆23 · Updated 2 years ago
- ☆38 · Updated 7 years ago
- Keras library for building (Universal) Transformers, facilitating BERT and GPT models ☆11 · Updated 6 years ago
- Adapting Capsule Networks for the Named Entity Recognition task ☆11 · Updated 5 years ago
- An Attention Layer in Keras ☆43 · Updated 5 years ago