RMichaelSwan / MogrifierLSTM
A quick walk-through of the innards of LSTMs and a naive implementation of the Mogrifier LSTM paper in PyTorch
☆74 · Updated 4 years ago
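For orientation before browsing the related repositories, here is a minimal sketch of the core idea from the Mogrifier LSTM paper (Melis et al., 2019): the input and hidden state alternately gate each other for a few rounds before a standard LSTM step. This is an illustrative sketch only; the class and parameter names (`MogrifierLSTMCell`, `mogrify`, `rounds`) are assumptions, not this repository's actual API.

```python
import torch
import torch.nn as nn

class MogrifierLSTMCell(nn.Module):
    """Sketch of the mogrifier rounds from Melis et al. (2019).

    Before each LSTM step, x and h alternately modulate each other
    for `rounds` steps. Names are illustrative, not this repo's API.
    """

    def __init__(self, input_size, hidden_size, rounds=5):
        super().__init__()
        self.lstm = nn.LSTMCell(input_size, hidden_size)
        self.rounds = rounds
        # Odd rounds update x from h; even rounds update h from x.
        self.q = nn.ModuleList(
            nn.Linear(hidden_size, input_size) for _ in range((rounds + 1) // 2)
        )
        self.r = nn.ModuleList(
            nn.Linear(input_size, hidden_size) for _ in range(rounds // 2)
        )

    def mogrify(self, x, h):
        for i in range(1, self.rounds + 1):
            if i % 2 == 1:  # x_i = 2 * sigmoid(Q_i h_{i-1}) * x_{i-2}
                x = 2 * torch.sigmoid(self.q[i // 2](h)) * x
            else:           # h_i = 2 * sigmoid(R_i x_{i-1}) * h_{i-2}
                h = 2 * torch.sigmoid(self.r[i // 2 - 1](x)) * h
        return x, h

    def forward(self, x, state):
        h, c = state
        x, h = self.mogrify(x, h)
        return self.lstm(x, (h, c))


# Usage: one step over a batch of 8 with illustrative sizes.
cell = MogrifierLSTMCell(input_size=32, hidden_size=64)
x = torch.randn(8, 32)
h, c = torch.zeros(8, 64), torch.zeros(8, 64)
h, c = cell(x, (h, c))
```

With `rounds=0` this reduces to a plain `nn.LSTMCell`; the paper found around five rounds to be a sweet spot, with the 2x factor keeping the gates' expected scale near identity.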
Related projects
Alternatives and complementary repositories for MogrifierLSTM
- Implementation of Mogrifier LSTM in PyTorch ☆35 · Updated 4 years ago
- ☆83 · Updated 5 years ago
- A simple module that consistently outperforms self-attention and Transformer models on the main NMT datasets, with SoTA performance. ☆87 · Updated last year
- Code for the ACL 2020 paper "Character-Level Translation with Self-Attention" ☆32 · Updated 4 years ago
- Custom loss functions to use in (mainly) PyTorch. ☆37 · Updated 4 years ago
- Implementation of "SYNTHESIZER: Rethinking Self-Attention in Transformer Models" in PyTorch ☆70 · Updated 4 years ago
- Simple implementations of dilated LSTM, residual LSTM, and attention LSTM (following the corresponding papers). ☆17 · Updated 4 years ago
- How Does Selective Mechanism Improve Self-Attention Networks? ☆27 · Updated 3 years ago
- NLSTM: Nested LSTM in PyTorch ☆18 · Updated 6 years ago
- PyTorch implementation of "Block Recurrent Transformers" (Hutchins & Schlag et al., 2022) ☆83 · Updated 2 years ago
- Multi-head attention in PyTorch ☆148 · Updated 5 years ago
- An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built in. Fully compatible with PyTorch's LSTM. ☆133 · Updated 4 years ago
- ECML 2019: Graph Neural Networks for Multi-Label Classification ☆89 · Updated 4 months ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 4 years ago
- Document classification using LSTM + self-attention ☆112 · Updated 5 years ago
- LAnguage Modelling Benchmarks ☆137 · Updated 4 years ago
- Code and data for the paper "Multi-Source Domain Adaptation with Mixture of Experts" (EMNLP 2018) ☆64 · Updated 4 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆71 · Updated last year
- A PyTorch implementation of Fairseq Convolutional Sequence to Sequence Learning (Gehring et al., 2017) ☆44 · Updated 5 years ago
- Multi-Source Domain Attention ☆13 · Updated 4 years ago
- Code for Explicit Sparse Transformer ☆56 · Updated last year
- Implementation of Meta-LSTM from "Meta Multi-Task Learning for Sequence Modeling" (AAAI-18) ☆33 · Updated 6 years ago
- Implements "Reformer: The Efficient Transformer" in PyTorch. ☆84 · Updated 4 years ago
- Variational Transformers for Diverse Response Generation ☆82 · Updated 3 months ago
- Code to reproduce the experiments in the paper "Transformer Based Multi-Source Domain Adaptation" (EMNLP 2020) ☆41 · Updated 4 years ago
- This repository contains various types of attention mechanisms, such as Bahdanau, soft attention, additive attention, hierarchical attention… ☆122 · Updated 3 years ago
- PyTorch implementation of the paper "Hyperbolic Interaction Model For Hierarchical Multi-Label Classification" ☆48 · Updated 5 years ago
- ☆32 · Updated 4 years ago
- PyTorch implementation of TCN ☆20 · Updated 5 years ago
- Code for Causal Semantic Generative model (CSG), the model proposed in "Learning Causal Semantic Representation for Out-of-Distribution … ☆73 · Updated 2 years ago