JRC1995 / INTER-INTRA-attentions

An experimental custom seq2seq model with both inter-layer attention (layer-wise attention) and intra-layer attention (attention to an RNN unit's own previous hidden states) for abstractive summarization.
10 stars · Nov 30, 2017 · Updated 8 years ago
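The intra-layer mechanism described above can be pictured as an RNN cell that, at every step, attends over the stack of its own past hidden states and mixes the resulting context into its next update. The sketch below is a minimal, hedged illustration of that idea in PyTorch; the class name `IntraAttentionCell`, the GRU cell, and the dot-product scoring are illustrative assumptions, not code from this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntraAttentionCell(nn.Module):
    """Illustrative sketch (not the repo's implementation): an RNN cell
    with intra-layer attention over its own previous hidden states."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        # The cell consumes the input concatenated with the intra-attention
        # context vector computed from its own history.
        self.cell = nn.GRUCell(input_size + hidden_size, hidden_size)
        self.score = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x, h, history):
        # history: (batch, t, hidden) -- this cell's previous hidden states,
        # or None at the first step.
        if history is None:
            context = torch.zeros_like(h)
        else:
            # Bilinear scores between the current state and each past state
            scores = torch.bmm(history, self.score(h).unsqueeze(2))  # (batch, t, 1)
            weights = F.softmax(scores, dim=1)
            context = (weights * history).sum(dim=1)                 # (batch, hidden)
        return self.cell(torch.cat([x, context], dim=1), h)

# Unrolling: each step's hidden state is appended to the history so that
# later steps can attend to it.
cell = IntraAttentionCell(input_size=32, hidden_size=64)
x = torch.randn(8, 10, 32)            # (batch, seq_len, input)
h = torch.zeros(8, 64)
history = None
for t in range(x.size(1)):
    h = cell(x[:, t], h, history)
    step = h.unsqueeze(1)
    history = step if history is None else torch.cat([history, step], dim=1)
```

Inter-layer attention would sit on top of this: a higher RNN layer attending over the hidden states produced by the layer below, in the same score-softmax-sum pattern shown here.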

Alternatives and similar repositories for INTER-INTRA-attentions

Users interested in INTER-INTRA-attentions are comparing it to the libraries listed below.

