RMichaelSwan / MogrifierLSTM
A quick walk-through of the innards of LSTMs and a naive implementation of the Mogrifier LSTM paper in PyTorch
☆ 77 · Updated 4 years ago
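The Mogrifier mechanism this repository walks through (Melis et al., 2020) alternately gates the input x and the previous hidden state h for a few rounds before the standard LSTM cell update. A minimal NumPy sketch of that gating step, assuming single vectors and square projection matrices (function and variable names are illustrative, not taken from the repo):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mogrify(x, h, Q, R, rounds=5):
    """Mogrifier gating: x and h alternately modulate each other.

    Q maps h to a gate over x; R maps x to a gate over h.
    The factor 2 keeps the expected gate value near 1, so the
    identity map is recovered when the gates saturate at 0.5.
    """
    for i in range(1, rounds + 1):
        if i % 2 == 1:  # odd rounds rescale the input x
            x = 2 * sigmoid(Q @ h) * x
        else:           # even rounds rescale the hidden state h
            h = 2 * sigmoid(R @ x) * h
    return x, h

# Toy example: the mogrified (x, h) would then feed a standard LSTM cell.
rng = np.random.default_rng(0)
d = 4
x, h = rng.normal(size=d), rng.normal(size=d)
Q, R = rng.normal(size=(d, d)), rng.normal(size=(d, d))
mx, mh = mogrify(x, h, Q, R)
print(mx.shape, mh.shape)  # prints (4,) (4,)
```

With `rounds=0` the function degenerates to an ordinary LSTM input, which is why the paper treats the round count as a hyperparameter (5–6 rounds worked best in their experiments).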
Alternatives and similar repositories for MogrifierLSTM
Users interested in MogrifierLSTM are comparing it to the libraries listed below.
- Implementation of Mogrifier LSTM in PyTorch · ☆ 35 · Updated 5 years ago
- PyTorch implementation of "Block Recurrent Transformers" (Hutchins & Schlag et al., 2022) · ☆ 84 · Updated 3 years ago
- ☆ 83 · Updated 5 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets with SoTA performance · ☆ 85 · Updated last year
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch · ☆ 70 · Updated 5 years ago
- PyTorch implementation of the LSTM from "A Theoretically Grounded Application of Dropout in Recurrent Neural Networks" (https://arxiv.org/abs/1512.…) · ☆ 21 · Updated 4 years ago
- Multi-head attention in PyTorch · ☆ 152 · Updated 6 years ago
- An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built in; fully compatible with PyTorch LSTM · ☆ 133 · Updated 5 years ago
- LAnguage Modelling Benchmarks · ☆ 137 · Updated 5 years ago
- PyTorch implementations of LSTM variants (Dropout + Layer Norm) · ☆ 136 · Updated 4 years ago
- Code for the ACL 2020 paper "Character-Level Translation with Self-Attention" · ☆ 31 · Updated 4 years ago
- This repository contains various attention mechanisms, such as Bahdanau, soft attention, additive attention, hierarchical attention… · ☆ 126 · Updated 3 years ago
- Two-layer hierarchical softmax implementation for PyTorch · ☆ 69 · Updated 4 years ago
- This is my demo of Chen et al., "GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks" (ICML 2018) · ☆ 178 · Updated 3 years ago
- Custom loss functions to use in (mainly) PyTorch · ☆ 39 · Updated 4 years ago
- [EMNLP'19] Summary for Transformer understanding · ☆ 53 · Updated 5 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model · ☆ 60 · Updated 4 years ago
- Document classification using LSTM + self-attention · ☆ 112 · Updated 5 years ago
- NLSTM: Nested LSTM in PyTorch · ☆ 17 · Updated 7 years ago
- Code for Explicit Sparse Transformer · ☆ 62 · Updated last year
- PyTorch implementation of R-Transformer; some parts of the code are adapted from implementations of TCN and Transformer · ☆ 230 · Updated 5 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" · ☆ 73 · Updated 2 years ago
- A PyTorch implementation of the TCAN model in "Temporal Convolutional Attention-based Network for Sequence Modeling" · ☆ 141 · Updated 2 years ago
- ECML 2019: Graph Neural Networks for Multi-Label Classification · ☆ 90 · Updated 10 months ago
- "How Does Selective Mechanism Improve Self-Attention Networks?" · ☆ 27 · Updated 4 years ago
- Minimal RNN classifier with self-attention in PyTorch · ☆ 150 · Updated 3 years ago
- A PyTorch implementation of self-attention with relative position representations · ☆ 50 · Updated 4 years ago
- A PyTorch implementation of the ICLR 2017 paper "Hierarchical Multiscale Recurrent Neural Networks" (https://openreview.net/pdf?i…) · ☆ 51 · Updated 7 years ago
- Code for "Understanding and Improving Layer Normalization" · ☆ 46 · Updated 5 years ago
- ☆ 20 · Updated 5 years ago