harvardnlp / struct-attn
Code for Structured Attention Networks https://arxiv.org/abs/1702.00887
☆238 · Updated 8 years ago
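The paper's core idea is to replace the usual softmax attention distribution with marginals of a graphical model, e.g. a linear-chain CRF over binary "attend / don't attend" variables, computed differentiably by the forward-backward algorithm. As a rough illustration (not the repo's actual code, and with an illustrative function name and binary-state setup), a minimal NumPy sketch of those marginals:

```python
import numpy as np

def linear_chain_marginals(unary, trans):
    """Forward-backward for a binary linear-chain CRF.

    unary: (n, 2) log-potentials for each z_i in {0, 1}
           (e.g. attention scores for "attend to position i")
    trans: (2, 2) log transition potentials between neighbors
    Returns (n,) marginal probabilities p(z_i = 1).
    """
    n = unary.shape[0]

    # Forward pass: fwd[i, s] = log sum over prefixes ending in state s.
    fwd = np.zeros((n, 2))
    fwd[0] = unary[0]
    for i in range(1, n):
        for s in range(2):
            fwd[i, s] = unary[i, s] + np.logaddexp(
                fwd[i - 1, 0] + trans[0, s],
                fwd[i - 1, 1] + trans[1, s])

    # Backward pass: bwd[i, s] = log sum over suffixes starting after state s.
    bwd = np.zeros((n, 2))
    for i in range(n - 2, -1, -1):
        for s in range(2):
            bwd[i, s] = np.logaddexp(
                bwd[i + 1, 0] + trans[s, 0] + unary[i + 1, 0],
                bwd[i + 1, 1] + trans[s, 1] + unary[i + 1, 1])

    log_z = np.logaddexp(fwd[n - 1, 0], fwd[n - 1, 1])  # partition function
    return np.exp(fwd[:, 1] + bwd[:, 1] - log_z)
```

With zero transition potentials the chain factorizes and each marginal reduces to an independent sigmoid over the unary scores; nonzero transitions let neighboring positions encourage or discourage attending jointly, which is what distinguishes structured attention from standard per-position softmax attention. In a full model these marginals would weight the source vectors when forming the context, and gradients flow through the forward-backward recursions.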
Alternatives and similar repositories for struct-attn
Users interested in struct-attn are comparing it to the repositories listed below.
- Tensorflow implementation of Recursive Neural Networks using LSTM units ☆136 · Updated 8 years ago
- ☆79 · Updated 8 years ago
- The implementation of key value memory networks in tensorflow ☆248 · Updated 6 years ago
- Dynamic evaluation for pytorch language models, now includes hyperparameter tuning ☆104 · Updated 7 years ago
- Gated Attention Reader for Text Comprehension ☆188 · Updated 7 years ago
- Attention model for entailment on SNLI corpus implemented in Tensorflow and Keras ☆177 · Updated 8 years ago
- ☆149 · Updated 2 years ago
- Tensorflow implementation of Dynamic Coattention Networks for Question Answering ☆100 · Updated 8 years ago
- Hierarchical Encoder Decoder for Dialog Modelling ☆95 · Updated 6 years ago
- Implementation of A Structured Self-attentive Sentence Embedding ☆108 · Updated 6 years ago
- Top-down Tree LSTM (NAACL 2016) http://aclweb.org/anthology/N/N16/N16-1035.pdf ☆83 · Updated 8 years ago
- Language Modeling ☆156 · Updated 5 years ago
- Code to train state-of-the-art Neural Machine Translation systems ☆105 · Updated 8 years ago
- Dynamic Memory Network implementation in TensorFlow ☆179 · Updated 6 years ago
- Decomposable Attention Model for Sentence Pair Classification (from https://arxiv.org/abs/1606.01933) ☆95 · Updated 8 years ago
- Implementation of Attention-over-Attention Neural Networks for Reading Comprehension (https://arxiv.org/abs/1607.04423) in TensorFlow ☆177 · Updated 8 years ago
- ☆144 · Updated 7 years ago
- Code for Learning Structured Text Representations ☆128 · Updated 7 years ago
- ☆134 · Updated 7 years ago
- NEG loss implemented in pytorch ☆124 · Updated 7 years ago
- End-To-End Memory Networks in Theano ☆130 · Updated 2 years ago
- Python code for training all models in the ICLR paper, "Towards Universal Paraphrastic Sentence Embeddings". These models achieve strong … ☆192 · Updated 9 years ago
- ☆218 · Updated 9 years ago
- ☆179 · Updated 6 years ago
- This is an implementation of the Attention Sum Reader model as presented in "Text Comprehension with the Attention Sum Reader Network" av… ☆98 · Updated 8 years ago
- NYU ML² work on sentence encoding with tree structure and dynamic graphs ☆108 · Updated 6 years ago
- Hierarchical Recurrent Encoder Decoder for Query Suggestion ☆108 · Updated 8 years ago
- Attention-based NMT with a coverage mechanism to indicate whether a source word is translated or not ☆111 · Updated 5 years ago
- Code to accompany the paper "Learning Graphical State Transitions" ☆170 · Updated 8 years ago
- TensorFlow implementation of Hierarchical Attention Networks for Document Classification and some extension ☆94 · Updated 8 years ago